Multi-label feature selection (MFS) is a crucial dimensionality reduction technique aimed at identifying informative features associated with multiple labels. However, traditional centralized methods face significant challenges in privacy-sensitive and distributed settings, often neglecting label dependencies and suffering from low computational efficiency. To address these issues, we introduce a novel framework, Fed-MFSDHBCPSO: federated MFS via a dual-layer hybrid breeding cooperative particle swarm optimization algorithm with manifold and sparsity regularization (DHBCPSO-MSR). Leveraging the federated learning paradigm, Fed-MFSDHBCPSO allows clients to perform local feature selection (FS) using DHBCPSO-MSR. Locally selected feature subsets are encrypted with differential privacy (DP) and transmitted to a central server, where they are securely aggregated and refined through secure multi-party computation (SMPC) until global convergence is achieved. Within each client, DHBCPSO-MSR employs a dual-layer FS strategy. The inner layer constructs sample and label similarity graphs, generates Laplacian matrices to capture the manifold structure between samples and labels, and applies L2,1-norm regularization to sparsify the feature subset, yielding an optimized feature weight matrix. The outer layer uses a hybrid breeding cooperative particle swarm optimization algorithm to further refine the feature weight matrix and identify the optimal feature subset. The updated weight matrix is then fed back to the inner layer for further optimization. Comprehensive experiments on multiple real-world multi-label datasets demonstrate that Fed-MFSDHBCPSO consistently outperforms both centralized and federated baseline methods across several key evaluation metrics.
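The L2,1-norm regularization used by the inner layer induces row-wise sparsity of the feature weight matrix, so that whole features (rows) are driven to zero. A minimal NumPy sketch of the norm and its feature-dropping effect (the function name and toy matrix are ours, not from the paper):

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum of the Euclidean norms of the rows of W.
    Penalizing it zeroes out entire rows of the feature weight
    matrix, i.e., removes whole features across all labels."""
    return np.sum(np.linalg.norm(W, axis=1))

# Toy feature-weight matrix: 3 features x 2 labels.
W = np.array([[3.0, 4.0],   # row norm 5
              [0.0, 0.0],   # zero row: feature dropped
              [0.0, 2.0]])  # row norm 2
print(l21_norm(W))  # 7.0
```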
In this study, we present a deterministic convergence analysis of Gated Recurrent Unit (GRU) networks enhanced by a smoothing L1 regularization technique. While GRU architectures effectively mitigate gradient vanishing/exploding issues in sequential modeling, they remain prone to overfitting, particularly under noisy or limited training data. Traditional L1 regularization, despite enforcing sparsity and accelerating optimization, introduces non-differentiable points in the error function, leading to oscillations during training. To address this, we propose a novel smoothing L1 regularization framework that replaces the non-differentiable absolute-value function with a quadratic approximation, ensuring gradient continuity and stabilizing the optimization landscape. Theoretically, we rigorously establish three key properties of the resulting smoothing L1-regularized GRU (SL1-GRU) model: (1) monotonic decrease of the error function across iterations, (2) weak convergence characterized by vanishing gradients as iterations approach infinity, and (3) strong convergence of network weights to fixed points under finite conditions. Comprehensive experiments on benchmark datasets spanning function approximation, classification (KDD Cup 1999 Data, MNIST), and regression tasks (Boston Housing, Energy Efficiency) demonstrate SL1-GRU's superiority over baseline models (RNN, LSTM, GRU, L1-GRU, L2-GRU). Empirical results reveal that SL1-GRU achieves 1.0%-2.4% higher test accuracy in classification and 7.8%-15.4% lower mean squared error in regression compared to unregularized GRU, while reducing training time by 8.7%-20.1%. These outcomes validate the method's efficacy in balancing computational efficiency and generalization capability, and they strongly corroborate the theoretical analysis. The proposed framework not only resolves the non-differentiability challenge of L1 regularization but also provides a theoretical foundation for convergence guarantees in recurrent neural network training.
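The key idea of replacing |w| with a quadratic near zero can be sketched with a Huber-style smoothing (the paper's exact smoothing function may differ; the form below and the threshold eps are our illustrative choice):

```python
import numpy as np

def smooth_l1(w, eps=0.1):
    """Huber-style quadratic smoothing of |w|: matches |w| for |w| > eps,
    and inside [-eps, eps] uses the parabola w^2/(2*eps) + eps/2, which
    agrees with |w| in both value and slope at the junctions w = +/-eps."""
    a = np.abs(w)
    return np.where(a > eps, a, w**2 / (2 * eps) + eps / 2)

def smooth_l1_grad(w, eps=0.1):
    """The gradient is continuous everywhere, unlike sign(w) for plain L1,
    which removes the oscillation source at w = 0 during training."""
    return np.where(np.abs(w) > eps, np.sign(w), w / eps)

w = np.array([-1.0, -0.05, 0.0, 0.05, 1.0])
print(smooth_l1(w))       # |w| at the ends; the parabola (offset eps/2) near zero
print(smooth_l1_grad(w))  # ramps linearly through zero instead of jumping -1 -> 1
```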
Absorption compensation is a process involving the exponential amplification of reflection amplitudes. This process amplifies both the seismic signal and the noise, thereby substantially reducing the signal-to-noise ratio of seismic data. Therefore, this paper proposes a multichannel inversion absorption compensation method based on structure tensor regularization. First, the structure tensor is utilized to extract the spatial inclination of seismic signals, and a spatial prediction filter is designed along the inclination direction. The spatial prediction filter is then introduced into the regularization condition of multichannel inversion absorption compensation, and the absorption compensation is realized under the framework of multichannel inversion theory. The spatial predictability of seismic signals is also introduced into the objective function of absorption compensation inversion. Thus, the inversion system can effectively suppress the noise amplification effect during absorption compensation and improve the recovery accuracy of high-frequency signals. Synthetic and field data tests are conducted to demonstrate the accuracy and effectiveness of the proposed method.
Energy resolution calibration is crucial for the analysis of gamma-ray spectra measured with a scintillation detector. A locally constrained regularization method was proposed to determine the resolution calibration parameters. First, a Monte Carlo simulation model consistent with the actual measurement system was constructed to obtain the energy deposition distribution in the scintillation crystal. Subsequently, the regularization objective function was established based on weighted least squares and additional constraints. The additional constraints were designed using a special weighting scheme based on the incident gamma-ray energies. Next, an intelligent algorithm was introduced to search for the optimal resolution calibration parameters by minimizing the objective function. The most appropriate regularization parameter was determined through mathematical experiments; when the regularization parameter was 30, the calibrated results exhibited the minimum RMSE. Simulations and test pit experiments were conducted to verify the performance of the proposed method. The simulation results demonstrate that the proposed algorithm can determine resolution calibration parameters more accurately than traditional weighted least squares, and the test pit experimental results show that the R-squared values between the calibrated and measured spectra are larger than 0.99. The accurate resolution calibration parameters determined by the proposed method lay the foundation for gamma-ray spectral processing and simulation benchmarking.
Target tracking is an essential task in contemporary computer vision applications. However, its effectiveness is susceptible to model drift due to the changing appearance of targets, which often compromises tracking robustness and precision. In this paper, a universally applicable method based on correlation filters is introduced to mitigate model drift in complex scenarios. It employs temporal-confidence samples as a prior to guide the model update process and ensure its precision and consistency over a long period. An improved update mechanism based on the peak side-lobe to peak correlation energy (PSPCE) criterion is proposed, which selects high-confidence samples along the temporal dimension to update the temporal-confidence samples. Extensive experiments on various benchmarks demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods. Especially when the target appearance changes significantly, our method is more robust and can achieve a balance between precision and speed. Specifically, on the object tracking benchmark (OTB-100) dataset, compared to the baseline, the tracking precision of our model improves by 8.8%, 8.8%, 5.1%, 5.6%, and 6.9% for background clutter, deformation, occlusion, rotation, and illumination variation, respectively. The results indicate that the proposed method can significantly enhance the robustness and precision of target tracking in dynamic and challenging environments, offering a reliable solution for applications such as real-time monitoring, autonomous driving, and precision guidance.
Full waveform inversion (FWI) is a precise method for parameter inversion that harnesses the complete wavefield information of seismic waves. It holds the potential to characterize the detailed features of the model with high accuracy. However, due to inaccurate initial models, the absence of low-frequency data, and incomplete observational data, FWI exhibits pronounced nonlinear characteristics, and its inversion capability is constrained when the strata are deeply buried. To enhance the accuracy and precision of FWI, this paper introduces a novel approach to address the aforementioned challenges: fractional-order anisotropic total p-variation regularization for full waveform inversion (FATpV-FWI). Building upon total variation (TV) regularization, this method incorporates fractional-order TV regularization to construct the inversion objective function and then employs the alternating direction method of multipliers for the solution. This approach mitigates the staircase effect stemming from total variation in seismic inversion, thereby facilitating the reconstruction of sharp interfaces of geophysical parameters while smoothing background variations. Simultaneously, replacing integer-order differences with fractional-order differences strengthens the correlation among seismic data and diminishes the scattering effect caused by integer-order differences in seismic inversion. The outcomes of model tests validate the efficacy of this method, highlighting its ability to enhance the overall accuracy of the inversion process.
Modern warfare demands weapons capable of penetrating substantial structures, which presents significant challenges to the reliability of the electronic devices that are crucial to the weapon's performance. Due to the miniaturization of electronic components, it is challenging to directly measure or numerically predict the mechanical response of small-sized critical interconnections in board-level packaging structures to ensure the mechanical reliability of electronic devices in projectiles under harsh working conditions. To address this issue, an indirect measurement method using Bayesian regularization-based load identification was proposed in this study based on finite element (FE) predictions to estimate the load applied on critical interconnections of board-level packaging structures during projectile penetration. For predicting the high-strain-rate penetration process, an FE model was established with elasto-plastic constitutive models of the representative packaging materials (that is, solder material and epoxy molding compound), in which the material constitutive parameters were calibrated against experimental results obtained using the split-Hopkinson pressure bar. As the impact-induced dynamic bending of the printed circuit board resulted in alternating tensile-compressive loading on the solder joints during penetration, the corner solder joints in the edge regions experience the highest S11 stress and strain, making them more prone to failure. Based on FE predictions at different structural scales, an improved Bayesian method based on augmented Tikhonov regularization was theoretically proposed to address the issues of ill-posed matrix inversion and noise sensitivity in load identification at the critical solder joints. By incorporating a wavelet thresholding technique, the method resolves the problem of poor load identification accuracy at high noise levels. The proposed method achieves satisfactorily small relative errors and high correlation coefficients in identifying the mechanical response of local interconnections in board-level packaging structures, while effectively balancing the smoothness of response curves with the accuracy of peak identification. At medium and low noise levels, the relative error is less than 6%, while it is less than 10% at high noise levels. The proposed method provides an effective indirect approach for determining the boundary conditions of localized solder joints during the projectile penetration process, and its philosophy can be readily extended to other scenarios of multiscale analysis for highly nonlinear materials and structures under extreme loading conditions.
Owing to the constraints of depth sensing technology, images acquired by depth cameras are inevitably mixed with various noises. For depth maps presented in gray values, this research proposes a novel denoising model based on the graph-based transform (GBT) and dual graph Laplacian regularization (DGLR), termed DGLR-GBT. This model specifically aims to remove Gaussian white noise by capitalizing on the nonlocal self-similarity (NSS) and the piecewise smoothness properties intrinsic to depth maps. Within the group sparse coding (GSC) framework, a combination of GBT and DGLR is implemented. First, within each group, the graph is constructed using estimates of the true values of the averaged blocks instead of the observations. Second, the graph Laplacian regularization terms are constructed based on the rows and columns of similar block groups, respectively. Lastly, the solution is obtained effectively by combining the alternating direction method of multipliers (ADMM) with a weighted thresholding method in the GBT domain.
We use extrapolated Tikhonov regularization to deal with the ill-posed problem of 3D density inversion of gravity gradient data. The use of regularization parameters in the proposed method reduces the deviations between calculated and observed data. We also use a depth weighting function based on the eigenvector of the gravity gradient tensor to eliminate undesired effects owing to the fast attenuation of the position function. Model data suggest that the extrapolated Tikhonov regularization in conjunction with the depth weighting function can effectively recover the 3D distribution of density anomalies. We conduct density inversion of gravity gradient data from the Kauring test site in Australia and compare the inversion results with published research results. The proposed inversion method can be used to obtain the 3D density distribution of underground anomalies.
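The combination of Tikhonov regularization with a depth weighting function amounts to penalizing a weighted norm of the model. A minimal sketch under our own illustrative assumptions (a generic linear forward operator, a toy diagonal depth weight; neither the extrapolation scheme nor the eigenvector-based weight of the paper is implemented):

```python
import numpy as np

def tikhonov_weighted(A, b, lam, W):
    """Minimize ||Ax - b||^2 + lam * ||Wx||^2 via the normal equations
    (A^T A + lam * W^T W) x = A^T b. Here W plays the role of a depth
    weighting matrix that counteracts the rapid decay of the kernel
    with source depth."""
    return np.linalg.solve(A.T @ A + lam * W.T @ W, A.T @ b)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))          # toy forward operator
x_true = np.zeros(10); x_true[3] = 1.0  # a single density anomaly
b = A @ x_true                          # noise-free synthetic data
# Illustrative weight: lighter penalty on deeper cells (index ~ depth).
W = np.diag(1.0 / np.sqrt(np.arange(1, 11)))
x = tikhonov_weighted(A, b, 1e-6, W)
print(np.round(x, 3))  # recovers the unit spike at index 3
```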
To decrease the sensitivity of the constant scale parameter, adaptively optimize the scale parameter in the iteration regularization model (IRM), and attain a desirable level of applicability for image denoising, a novel IRM with an adaptive scale parameter is proposed. First, the classic regularization term is modified and the equation for the adaptive scale parameter is deduced. Then, the initial value of the varying scale parameter is obtained from the trend of the number of iterations and the scale parameter sequence vectors. Finally, the novel iterative regularization method is used for image denoising. Numerical experiments show that, compared with the IRM with a constant scale parameter, the proposed method with a varying scale parameter can not only reduce the number of iterations when the scale parameter becomes smaller, but also efficiently remove noise when the scale parameter becomes larger while still preserving image details.
Following the conclusions of the simulation experiments in paper I, the Tikhonov regularization method is applied to cyclone wind retrieval with a rain-effect-considering geophysical model function (called GMF+Rain). The GMF+Rain model, which is based on the NASA scatterometer-2 (NSCAT2) GMF, is presented to compensate for the effects of rain on cyclone wind retrieval. With the multiple solution scheme (MSS), the noise of wind retrieval is effectively suppressed, but the influence of the background increases, which will cause a large wind direction error in ambiguity removal when the background error is large. However, this can be mitigated by the new ambiguity removal method of Tikhonov regularization, as proved in the simulation experiments. A case study on an extratropical cyclone of hurricane strength observed with SeaWinds at 25-km resolution shows that the retrieved wind speed for areas with rain is in better agreement with that derived from the best track analysis for the GMF+Rain model, but the wind direction obtained with the two-dimensional variational (2DVAR) ambiguity removal is incorrect. The new method of Tikhonov regularization effectively improves the performance of wind direction ambiguity removal through choosing appropriate regularization parameters, and the retrieved wind speed is almost the same as that obtained from the 2DVAR.
A new normalized least mean square (NLMS) adaptive filter is first derived from a cost function that incorporates the conventional NLMS cost with a minimum-disturbance (MD) constraint. A variable regularization factor (RF) is then employed to control the contribution made by the MD constraint in the cost function. Analysis results show that the RF can be taken as a combination of the step size and regularization parameter in the conventional NLMS. This implies that these parameters can be jointly controlled by simply tuning the RF, as the proposed algorithm does. It also demonstrates that the RF can accelerate the convergence rate of the proposed algorithm and that its optimal value can be obtained by minimizing the squared noise-free a posteriori error. A method for automatically determining the value of the RF is also presented, which is free of any prior knowledge of the noise. While simulation results verify the analytical ones, it is also illustrated that the performance of the proposed algorithm is superior to state-of-the-art ones in both steady-state misalignment and convergence rate.
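For context, the conventional NLMS baseline that the variable-RF algorithm generalizes can be sketched as follows (a system-identification toy example with our own parameters; the paper's variable-RF update itself is not reproduced here):

```python
import numpy as np

def nlms(x, d, n_taps, mu=0.5, delta=1e-3):
    """Conventional NLMS. The fixed regularization term delta in the
    denominator keeps the update bounded when the input power x^T x is
    small; the paper's variant instead folds step size and delta into
    one variable regularization factor."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]    # newest sample first
        e = d[n] - w @ xn                      # a priori error
        w += mu * e * xn / (xn @ xn + delta)   # normalized update
    return w

rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.1])                 # unknown system to identify
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)]                 # noise-free desired signal
w = nlms(x, d, n_taps=3)
print(np.round(w, 4))                          # converges to h
```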
Inversion of Young's modulus, Poisson's ratio, and density from pre-stack seismic data has been proved feasible and effective. However, the existing methods do not take full advantage of the prior information. Without considering the lateral continuity of the inversion results, these methods need to invert the reflectivity first. In this paper, we propose a multi-gather simultaneous inversion for pre-stack seismic data. Meanwhile, total variation (TV) regularization, L1 norm regularization, and an initial model constraint are used. To solve the objective function, which contains the L1 norm, TV norm, and L2 norm, we develop an algorithm based on split Bregman iteration. The main advantages of our method are as follows: (1) the elastic parameters are calculated directly from the objective function rather than from their reflectivity, so the stability and accuracy of the inversion process can be ensured; (2) the inversion results are more in accordance with the prior geological information; (3) the lateral continuity of the inversion results is improved. The proposed method is illustrated on theoretical model data and tested on a 2-D field dataset.
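Split Bregman iteration handles the L1 and TV terms through the element-wise soft-thresholding (shrinkage) operator; the sketch below shows only that elementary building block, not the paper's full multi-gather scheme:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding (shrinkage) operator: the closed-form minimizer
    of t*|u| + 0.5*(u - x)^2. It is the cheap elementary step that makes
    the L1 and TV subproblems in split Bregman tractable."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
# Large entries move toward zero by t; entries with |x| <= t become zero.
print(shrink(x, 0.5))
```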
The simplified linear model of Grad-Shafranov (GS) reconstruction can be reformulated into an inverse boundary value problem of Laplace's equation. Therefore, in this paper we focus on the method of solving this inverse boundary value problem. First, the variational regularization method is used to deal with the ill-posedness of the Cauchy problem for Laplace's equation. Then, the 'L-curve' principle is adopted to choose the optimal regularization parameter. Finally, a numerical experiment is implemented with a section of Neumann and Dirichlet boundary conditions with observation errors. The results converge well to the exact solution of the problem, which proves the efficiency and robustness of the proposed method. When the order of the observation error δ is 10^(-1), the order of the approximate result's error can reach 10^(-3).
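The L-curve principle plots the residual norm against the solution norm over a range of regularization parameters and picks the corner of the resulting curve. A generic sketch with a toy discrete problem (our own setup, not the GS reconstruction operator):

```python
import numpy as np

def l_curve_points(A, b, lambdas):
    """For each regularization parameter, solve the Tikhonov problem and
    record (residual norm, solution norm). On a log-log plot these points
    trace an 'L'; the corner balances data fit against solution size and
    marks the preferred lambda."""
    pts = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return np.array(pts)

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)
pts = l_curve_points(A, b, np.logspace(-6, 2, 9))
# As lambda grows, the residual norm rises while the solution norm shrinks.
print(pts[0], pts[-1])
```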
Accurate material physical properties and initial and boundary conditions are indispensable to numerical simulation of the casting process, and they directly affect the simulation accuracy. The inverse heat conduction method can be used to identify the above-mentioned parameters based on temperature measurement data. This paper presents a new inverse method based on Tikhonov regularization theory. A regularization functional was established and the regularization parameter was deduced; the Newton-Raphson iteration method was used to solve the equations. One detailed case was solved to identify the thermal conductivity and specific heat of the sand mold and the interfacial heat transfer coefficient (IHTC) at the same time. This indicates that the regularization method is very efficient in decreasing the sensitivity to the temperature measurement data, overcoming the ill-posedness of the inverse heat conduction problem (IHCP), and improving the stability and accuracy of the results. As a general inverse method, it can be used to identify not only the material physical properties but also the parameters of the initial and boundary conditions.
Downward continuation is a key step in processing airborne geomagnetic data. However, downward continuation is a typically ill-posed problem because its computation is unstable; thus, regularization methods are needed to realize effective continuation. According to the Poisson integral plane approximate relationship between observation and continuation data, the computation formulae combined with the fast Fourier transform (FFT) algorithm are transformed to the frequency domain to accelerate the computational speed. The iterative Tikhonov regularization method and the iterative Landweber regularization method are used in this paper to overcome instability and improve the precision of the results. The availability of these two iterative regularization methods in the frequency domain is validated with simulated geomagnetic data, and the continuation results show good precision.
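The Landweber iteration regularizes an ill-posed linear problem by stopping a simple gradient descent early, so the iteration count acts as the regularization parameter. A generic matrix sketch (our own toy operator; the paper applies the iteration to the FFT-domain continuation operator instead):

```python
import numpy as np

def landweber(A, b, n_iter=500, omega=None):
    """Landweber iteration x <- x + omega * A^T (b - A x) for Ax = b.
    The step size must satisfy omega < 2 / ||A||_2^2; stopping after a
    finite number of iterations plays the role of regularization."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2  # safe default step
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += omega * A.T @ (b - A @ x)
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(15, 8))
x_true = rng.normal(size=8)
b = A @ x_true                     # noise-free data: iteration can run long
x = landweber(A, b, n_iter=5000)
print(np.allclose(x, x_true, atol=1e-3))
```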
In this paper, we continue to construct stationary classical solutions for the incompressible planar flows approximating singular stationary solutions of this problem. This procedure is carried out by constructing solutions of the following elliptic equations:

-Δu = λ Σ_{j=1}^{k} 1_{B_δ(x_{0,j})} (u - k_j)_+^p in Ω, u = 0 on ∂Ω,

where Ω is a bounded, simply connected, smooth domain and each k_j (j = 1, …, k) is a prescribed positive constant. The result we prove is that, for any given non-degenerate critical point X_0 = (x_{0,1}, …, x_{0,k}) of the Kirchhoff-Routh function defined on Ω^k corresponding to (k_1, …, k_k), there exists a stationary classical solution approximating the stationary k-point vortex solution. Moreover, as λ → +∞, the vorticity set shrinks to {x_{0,j}}, and the local vorticity strength near each x_{0,j} approaches k_j, j = 1, …, k. This result makes the study of the above problem with p ≥ 0 complete, since the cases p > 1, p = 1, and p = 0 have already been studied in [11, 12] and [13], respectively.
In this paper, the Tikhonov regularization method was used to solve the non-degenerate compact linear operator equation, which is a well-known ill-posed problem. Apart from the usual error level, the noise data were supposed to satisfy an additional monotonic condition. Moreover, under the assumption that the singular values of the operator have power form, improved convergence rates of the regularized solution were worked out.
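The role of the singular values in Tikhonov regularization is most visible in the SVD form of the solution, where each component is damped by a filter factor. A minimal sketch with a toy matrix (our own example, not the compact operator setting of the paper):

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov solution via the SVD: each component (u_i^T b)/sigma_i of
    the naive least-squares solution is damped by the filter factor
    sigma_i^2 / (sigma_i^2 + lam), which suppresses the noise-amplifying
    small singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam)                  # filter factors in (0, 1)
    return Vt.T @ (f * (U.T @ b) / s)

rng = np.random.default_rng(4)
A = rng.normal(size=(12, 6))
b = rng.normal(size=12)
# Agrees with the normal-equations form (A^T A + lam I) x = A^T b:
x_svd = tikhonov_svd(A, b, 0.1)
x_ne = np.linalg.solve(A.T @ A + 0.1 * np.eye(6), A.T @ b)
print(np.allclose(x_svd, x_ne))  # True
```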
Bathymetry data are usually obtained via single-beam or multibeam sounding; however, these methods exhibit low efficiency and coverage and are dependent on various parameters, including the condition of the vessel and the sea state. To overcome these limitations, we propose a method for marine bathymetry inversion based on satellite altimetry gravity anomaly data as a modification of the gravity-geologic method (GGM), a conventional terrain inversion method based on gravity data. In accordance with its principle, the modified method adopts a rectangular prism model for modeling the short-wavelength gravity anomaly and the Tikhonov regularization method to integrate the geophysical constraints, including the a priori water depth data and the characteristics of the sea bottom relief. The a priori water depth data can be obtained from shipborne measurements, borehole information, etc., and an existing bathymetry/terrain model can be taken as the initial model. Marquardt's method is used during the inversion process, and the regularization parameter can be determined adaptively. The model test and the application to the West Philippine Basin indicate the feasibility and effectiveness of the proposed method. The results indicate the capability of the proposed method to improve the overall accuracy of the water depth data; the proposed method can thus be used to conduct a preliminary study of ocean depths. Additionally, the results show that in the improved GGM, the density difference parameter has lost its original physical meaning and does not have a great impact on the inversion process. Owing to the boundedness of the study area, the inversion result may exhibit a lower confidence level near the margin than near the center. Furthermore, the modified GGM is time- and memory-intensive compared with the conventional GGM.
This article is devoted to the regularization of nonlinear ill-posed problems with accretive operators in Banach spaces. The data involved are assumed to be known only approximately. The authors concentrate their discussion on the convergence rates of the regularized solutions.
Funding (Fed-MFSDHBCPSO study): supported by the National Science Fund for Distinguished Young Scholars (No. 62025602), the National Natural Science Foundation of China (Nos. U22B2036, 11931015), the Fok Ying-Tong Education Foundation of China (No. 171105), the Fundamental Research Funds for the Central Universities (No. G2024WD0151), and in part by the Tencent Foundation and the XPLORER PRIZE.
Funding: funded by the National Key R&D Program of China (Grant No. 2018YFA0702504) and the Sinopec research project (P22162).
Abstract: Absorption compensation is a process involving the exponential amplification of reflection amplitudes. This process amplifies both the seismic signal and the noise, thereby substantially reducing the signal-to-noise ratio of seismic data. Therefore, this paper proposes a multichannel inversion absorption compensation method based on structure tensor regularization. First, the structure tensor is utilized to extract the spatial inclination of seismic signals, and a spatial prediction filter is designed along the inclination direction. The spatial prediction filter is then introduced into the regularization condition of multichannel inversion absorption compensation, and the absorption compensation is realized under the framework of multichannel inversion theory. The spatial predictability of seismic signals is also introduced into the objective function of absorption compensation inversion. Thus, the inversion system can effectively suppress the noise amplification effect during absorption compensation and improve the recovery accuracy of high-frequency signals. Synthetic and field data tests are conducted to demonstrate the accuracy and effectiveness of the proposed method.
Funding: supported by the National Natural Science Foundation of China (No. 41804141).
Abstract: Energy resolution calibration is crucial for the analysis of gamma-ray spectra measured with scintillation detectors. A locally constrained regularization method was proposed to determine the resolution calibration parameters. First, a Monte Carlo simulation model consistent with the actual measurement system was constructed to obtain the energy deposition distribution in the scintillation crystal. Then, the regularization objective function was established based on weighted least squares and additional constraints, with the additional constraints designed using a special weighting scheme based on the incident gamma-ray energies. Subsequently, an intelligent algorithm was introduced to search for the optimal resolution calibration parameters by minimizing the objective function. The most appropriate regularization parameter was determined through mathematical experiments; when the regularization parameter was 30, the calibrated results exhibited the minimum RMSE. Simulations and test pit experiments were conducted to verify the performance of the proposed method. The simulation results demonstrate that the proposed algorithm can determine resolution calibration parameters more accurately than traditional weighted least squares, and the test pit experimental results show that the R-squared values between the calibrated and measured spectra are larger than 0.99. The accurate resolution calibration parameters determined by the proposed method lay the foundation for gamma-ray spectral processing and simulation benchmarking.
Funding: supported by the Natural Science Foundation of Sichuan Province of China under Grant No. 2025ZNSFSC0522, and partially supported by the National Natural Science Foundation of China under Grants No. 61775030 and No. 61571096.
Abstract: Target tracking is an essential task in contemporary computer vision applications. However, its effectiveness is susceptible to model drift caused by changes in target appearance, which often compromises tracking robustness and precision. In this paper, a universally applicable method based on correlation filters is introduced to mitigate model drift in complex scenarios. It employs temporal-confidence samples as a prior to guide the model update process and ensure its precision and consistency over a long period. An improved update mechanism based on the peak side-lobe to peak correlation energy (PSPCE) criterion is proposed, which selects high-confidence samples along the temporal dimension to update the temporal-confidence samples. Extensive experiments on various benchmarks demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods. Especially when the target appearance changes significantly, our method is more robust and achieves a balance between precision and speed. Specifically, on the object tracking benchmark (OTB-100) dataset, compared to the baseline, the tracking precision of our model improves by 8.8%, 8.8%, 5.1%, 5.6%, and 6.9% for background clutter, deformation, occlusion, rotation, and illumination variation, respectively. The results indicate that the proposed method can significantly enhance the robustness and precision of target tracking in dynamic and challenging environments, offering a reliable solution for applications such as real-time monitoring, autonomous driving, and precision guidance.
Funding: supported by the China Postdoctoral Science Foundation (Grant No. 2024MF750281), the Postdoctoral Fellowship Program of CPSF (Grant No. GZC20230326), the Natural Science Foundation Project of Sichuan Province (Grant No. 2025ZNSFSC1170), and the Sichuan Science and Technology Program (Grant No. 2023ZYD0158).
Abstract: Full waveform inversion (FWI) is a precise parameter inversion method that harnesses the complete wavefield information of seismic waves, holding the potential to characterize the detailed features of a model with high accuracy. However, due to inaccurate initial models, the absence of low-frequency data, and incomplete observational data, FWI exhibits pronounced nonlinear characteristics, and its inversion capability is constrained when the strata are deeply buried. To enhance the accuracy and precision of FWI, this paper introduces a novel approach to address the aforementioned challenges, namely fractional-order anisotropic total p-variation regularization for full waveform inversion (FATpV-FWI). Building upon total variation (TV) regularization, this method incorporates fractional-order TV regularization to construct the inversion objective function, which is then solved with the alternating direction method of multipliers. This approach mitigates the staircase effect stemming from total variation in seismic inversion, thereby facilitating the reconstruction of sharp interfaces of geophysical parameters while smoothing background variations. Simultaneously, replacing integer-order differences with fractional-order differences strengthens the correlation among seismic data and diminishes the scattering effect caused by integer-order differences in seismic inversion. The outcomes of model tests validate the efficacy of this method, highlighting its ability to enhance the overall accuracy of the inversion process.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 52475166, 52175148) and the Regional Collaboration Project of Shanxi Province (Grant No. 202204041101044).
Abstract: Modern warfare demands weapons capable of penetrating substantial structures, which presents significant challenges to the reliability of the electronic devices that are crucial to the weapon's performance. Due to the miniaturization of electronic components, it is challenging to directly measure or numerically predict the mechanical response of small-sized critical interconnections in board-level packaging structures, making it difficult to ensure the mechanical reliability of electronic devices in projectiles under harsh working conditions. To address this issue, an indirect measurement method using Bayesian regularization-based load identification was proposed in this study, based on finite element (FE) predictions, to estimate the load applied on critical interconnections of board-level packaging structures during projectile penetration. For predicting the high-strain-rate penetration process, an FE model was established with elasto-plastic constitutive models of the representative packaging materials (that is, the solder material and the epoxy molding compound), in which the material constitutive parameters were calibrated against experimental results obtained with the split-Hopkinson pressure bar. As the impact-induced dynamic bending of the printed circuit board resulted in alternating tensile-compressive loading on the solder joints during penetration, the corner solder joints in the edge regions experience the highest S11 stress and strain, making them more prone to failure. Based on FE predictions at different structural scales, an improved Bayesian method based on augmented Tikhonov regularization was theoretically proposed to address the issues of ill-posed matrix inversion and noise sensitivity in load identification at the critical solder joints. By incorporating a wavelet thresholding technique, the method resolves the problem of poor load identification accuracy at high noise levels. The proposed method achieves satisfactorily small relative errors and high correlation coefficients in identifying the mechanical response of local interconnections in board-level packaging structures, while balancing the smoothness of response curves with the accuracy of peak identification. At medium and low noise levels, the relative error is less than 6%, while it is less than 10% at high noise levels. The proposed method provides an effective indirect approach for identifying the boundary conditions of localized solder joints during projectile penetration, and its philosophy can be readily extended to other scenarios of multiscale analysis for highly nonlinear materials and structures under extreme loading conditions.
Funding: National Natural Science Foundation of China (No. 62372100).
Abstract: Owing to the constraints of depth sensing technology, images acquired by depth cameras are inevitably mixed with various noises. For depth maps presented in gray values, this research proposes a novel denoising model, termed DGLR-GBT, which combines the graph-based transform (GBT) with dual graph Laplacian regularization (DGLR). This model specifically aims to remove Gaussian white noise by capitalizing on the nonlocal self-similarity (NSS) and piecewise smoothness properties intrinsic to depth maps. Within the group sparse coding (GSC) framework, a combination of GBT and DGLR is implemented. Firstly, within each group, the graph is constructed using estimates of the true values of the averaged blocks instead of the observations. Secondly, the graph Laplacian regularization terms are constructed based on the rows and columns of similar block groups, respectively. Lastly, the solution is obtained efficiently by combining the alternating direction method of multipliers (ADMM) with a weighted thresholding method in the GBT domain.
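As a minimal sketch of the graph-Laplacian regularizer at the heart of such models (the block-grouping and GBT steps are omitted, and the affinity matrix `W` below is a generic placeholder, not the paper's row/column construction), the penalty x^T L x measures how much a signal varies across graph edges:

```python
import numpy as np

def graph_laplacian(W):
    # Combinatorial Laplacian L = D - W for a symmetric affinity matrix W.
    return np.diag(W.sum(axis=1)) - W

def laplacian_penalty(x, L):
    # x^T L x = 0.5 * sum_{i,j} W_ij * (x_i - x_j)^2:
    # small when x is smooth over strongly connected nodes.
    return float(x @ L @ x)
```

Minimizing a data-fidelity term plus this penalty pulls each pixel toward its graph neighbors, which is why Laplacian regularization favors the piecewise-smooth structure of depth maps.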
Funding: supported by the National Major Special Equipment Development program (No. 2011YQ120045) and the National Natural Science Foundation of China (Nos. 41074050 and 41304023).
Abstract: We use extrapolated Tikhonov regularization to deal with the ill-posed problem of 3D density inversion of gravity gradient data. The use of regularization parameters in the proposed method reduces the deviations between calculated and observed data. We also use a depth weighting function based on the eigenvectors of the gravity gradient tensor to eliminate undesired effects owing to the fast attenuation of the position function. Model data suggest that extrapolated Tikhonov regularization in conjunction with the depth weighting function can effectively recover the 3D distribution of density anomalies. We conduct density inversion of gravity gradient data from the Kauring test site in Australia and compare the inversion results with published research results. The proposed inversion method can be used to obtain the 3D density distribution of underground anomalies.
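The core Tikhonov step in such inversions can be sketched as the regularized normal equations below (a minimal illustration: the weighting matrix `W` is a generic placeholder, not the eigenvector-based depth weighting of the paper, and the extrapolation over regularization parameters is not reproduced):

```python
import numpy as np

def tikhonov_solve(A, d, lam, W=None):
    # Minimize ||A m - d||^2 + lam * ||W m||^2; W defaults to the identity.
    n = A.shape[1]
    W = np.eye(n) if W is None else W
    return np.linalg.solve(A.T @ A + lam * (W.T @ W), A.T @ d)
```

In gravity and gravity-gradient inversion, W is typically chosen to counteract the rapid decay of the forward kernel with depth, so that recovered anomalies are not artificially concentrated near the surface.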
Funding: the National Natural Science Foundation of China (No. 60702069), the Research Project of the Department of Education of Zhejiang Province (No. 20060601), the Natural Science Foundation of Zhejiang Province (No. Y1080851), and the Shanghai International Cooperation project with the Region of France (No. 06SR07109).
Abstract: In order to decrease the sensitivity to the constant scale parameter, adaptively optimize the scale parameter in the iteration regularization model (IRM), and attain a desirable level of applicability for image denoising, a novel IRM with an adaptive scale parameter is proposed. First, the classic regularization term is modified and the equation for the adaptive scale parameter is deduced. Then, the initial value of the varying scale parameter is obtained from the trend of the number of iterations and the scale parameter sequence vectors. Finally, the novel iterative regularization method is used for image denoising. Numerical experiments show that, compared with the IRM with a constant scale parameter, the proposed method with a varying scale parameter can not only reduce the number of iterations when the scale parameter becomes smaller, but also efficiently remove noise when the scale parameter becomes larger, while well preserving the details of images.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 40775023).
Abstract: Following the conclusions of the simulation experiments in paper I, the Tikhonov regularization method is applied to cyclone wind retrieval with a rain-effect-considering geophysical model function (called CMF+Rain). The CMF+Rain model, which is based on the NASA scatterometer-2 (NSCAT2) GMF, is presented to compensate for the effects of rain on cyclone wind retrieval. With the multiple solution scheme (MSS), the noise of wind retrieval is effectively suppressed, but the influence of the background increases, which causes a large wind direction error in ambiguity removal when the background error is large. However, this can be mitigated by the new ambiguity removal method of Tikhonov regularization, as proved in the simulation experiments. A case study on an extratropical cyclone of hurricane strength observed with SeaWinds at 25-km resolution shows that the retrieved wind speed in areas with rain is in better agreement with that derived from the best track analysis for the CMF+Rain model, but the wind direction obtained with the two-dimensional variational (2DVAR) ambiguity removal is incorrect. The new method of Tikhonov regularization effectively improves the performance of wind direction ambiguity removal through choosing appropriate regularization parameters, and the retrieved wind speed is almost the same as that obtained from the 2DVAR.
Funding: supported by the National Natural Science Foundation of China (61571131, 11604055).
Abstract: A new normalized least mean square (NLMS) adaptive filter is first derived from a cost function which incorporates the conventional one of the NLMS with a minimum-disturbance (MD) constraint. A variable regularization factor (RF) is then employed to control the contribution made by the MD constraint in the cost function. Analysis results show that the RF can be taken as a combination of the step size and regularization parameter in the conventional NLMS. This implies that these parameters can be jointly controlled by simply tuning the RF, as the proposed algorithm does. It also demonstrates that the RF can accelerate the convergence rate of the proposed algorithm and that its optimal value can be obtained by minimizing the squared noise-free a posteriori error. A method for automatically determining the value of the RF is also presented, which is free of any prior knowledge of the noise. While simulation results verify the analytical ones, it is also illustrated that the performance of the proposed algorithm is superior to state-of-the-art ones in both steady-state misalignment and convergence rate.
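A minimal NLMS update with a regularization term in the normalization can be sketched as follows (the paper's variable-RF scheme is not reproduced; the fixed `mu` and `eps` below are illustrative placeholders):

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-3):
    # e = d - w^T x; w <- w + mu * e * x / (eps + ||x||^2).
    # eps plays the role of the regularization factor in the normalization,
    # preventing blow-up when the input energy ||x||^2 is small.
    e = d - w @ x
    return w + mu * e * x / (eps + x @ x), e
```

In the proposed algorithm this factor is tuned adaptively rather than held fixed, which is what lets a single knob jointly control the effective step size and the regularization.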
Funding: supported by the National Natural Science Foundation of China (Nos. 61775030, 61571096, 41301460, 61362018, and 41274127) and the key projects of the Hunan Provincial Department of Education (No. 16A174).
Abstract: Inversion of Young's modulus, Poisson's ratio, and density from pre-stack seismic data has been proved to be feasible and effective. However, the existing methods do not take full advantage of the prior information: without considering the lateral continuity of the inversion results, these methods need to invert the reflectivity first. In this paper, we propose multi-gather simultaneous inversion for pre-stack seismic data. Meanwhile, total variation (TV) regularization, L1 norm regularization, and an initial model constraint are used. In order to solve the objective function, which contains an L1 norm, a TV norm, and an L2 norm, we develop an algorithm based on split Bregman iteration. The main advantages of our method are as follows: (1) the elastic parameters are calculated directly from the objective function rather than from their reflectivity, so the stability and accuracy of the inversion process can be ensured; (2) the inversion results are more in accordance with the prior geological information; (3) the lateral continuity of the inversion results is improved. The proposed method is illustrated with theoretical model data and tested on a 2-D field dataset.
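The L1 and TV subproblems inside each split Bregman iteration reduce to the elementwise shrinkage (soft-thresholding) operator, which can be sketched as (names are illustrative; the full multi-gather objective is not reproduced):

```python
import numpy as np

def soft_threshold(v, t):
    # Closed-form solution of argmin_x t*|x| + 0.5*(x - v)^2,
    # applied elementwise to the split variables in each Bregman iteration.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Because this subproblem has a closed form, the split Bregman scheme only needs a quadratic solve plus cheap shrinkages per iteration, which is what makes the mixed L1/TV/L2 objective tractable.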
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 41175025).
Abstract: The simplified linear model of Grad-Shafranov (GS) reconstruction can be reformulated into an inverse boundary value problem of Laplace's equation. Therefore, in this paper we focus on the method of solving the inverse boundary value problem of Laplace's equation. In the first place, the variational regularization method is used to deal with the ill-posedness of the Cauchy problem for Laplace's equation. Then, the 'L-curve' principle is suggested for choosing the optimal regularization parameter. Finally, a numerical experiment is implemented with a section of Neumann and Dirichlet boundary conditions with observation errors. The results converge well to the exact solution of the problem, which proves the efficiency and robustness of the proposed method. When the order of the observation error δ is 10^(-1), the order of the approximate result error can reach 10^(-3).
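A crude stand-in for the L-curve corner search can be sketched as follows (the L-curve principle proper locates the point of maximum curvature on the log-log residual-vs-solution-norm curve; the minimum-product heuristic below is a simpler proxy, and the grid of candidate parameters is illustrative):

```python
import numpy as np

def l_curve_corner(A, b, lams):
    # For each candidate lambda, solve the Tikhonov problem and score the
    # trade-off by the product ||A x - b|| * ||x||; the smallest product is
    # a rough proxy for the corner of the L-curve.
    best, best_lam = np.inf, None
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        score = np.linalg.norm(A @ x - b) * np.linalg.norm(x)
        if score < best:
            best, best_lam = score, lam
    return best_lam
```

The corner balances the two competing norms: too little regularization inflates ||x||, too much inflates the residual, and the chosen lambda sits between the two regimes.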
Abstract: Accurate material physical properties and initial and boundary conditions are indispensable to numerical simulation of the casting process, and they are directly related to simulation accuracy. The inverse heat conduction method can be used to identify the above-mentioned parameters based on temperature measurement data. This paper presents a new inverse method based on Tikhonov regularization theory. A regularization functional was established and the regularization parameter was deduced, and the Newton-Raphson iteration method was used to solve the equations. One detailed case was solved to identify the thermal conductivity and specific heat of the sand mold and the interfacial heat transfer coefficient (IHTC) at the same time. This indicates that the regularization method is very efficient in decreasing the sensitivity to temperature measurement data, overcoming the ill-posedness of the inverse heat conduction problem (IHCP), and improving the stability and accuracy of the results. As a general inverse method, it can be used to identify not only material physical properties but also parameters of the initial and boundary conditions.
Funding: supported by the National Natural Science Foundation of China (Nos. 41304022, 41174026, 41104047), the National 973 Program (Nos. 61322201, 2013CB733303), the Key Laboratory Foundation of Geo-space Environment and Geodesy of the Ministry of Education (No. 13-01-08), and the Youth Innovation Foundation of High Resolution Earth Observation (No. GFZX04060103-5-12).
Abstract: Downward continuation is a key step in processing airborne geomagnetic data. However, downward continuation is a typically ill-posed problem because its computation is unstable; thus, regularization methods are needed to realize effective continuation. According to the Poisson integral plane approximate relationship between observation and continuation data, the computation formulae combined with the fast Fourier transform (FFT) algorithm are transformed to the frequency domain to accelerate the computational speed. The iterative Tikhonov regularization method and the iterative Landweber regularization method are used in this paper to overcome instability and improve the precision of the results. The availability of these two iterative regularization methods in the frequency domain is validated on simulated geomagnetic data, and the continuation results show good precision.
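The iterative Landweber scheme mentioned above can be sketched in its basic spatial-domain form (the paper applies it in the frequency domain via the FFT; the step size and iteration count here are illustrative, and early stopping of the iteration acts as the regularization):

```python
import numpy as np

def landweber(A, b, steps=200, tau=None):
    # Landweber iteration: x_{k+1} = x_k + tau * A^T (b - A x_k),
    # convergent for tau < 2 / ||A||_2^2; stopping early regularizes.
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + tau * A.T @ (b - A @ x)
    return x
```

In the frequency-domain variant, the matrix-vector products become pointwise multiplications by the continuation operator's spectrum, which is where the FFT acceleration comes from.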
Abstract: In this paper, we continue to construct stationary classical solutions for incompressible planar flows approximating singular stationary solutions of this problem. This procedure is carried out by constructing solutions of the following elliptic equation: -Δu = λ Σ_{j=1}^{k} 1_{B_δ(x_{0,j})} (u - k_j)_+^p in Ω, with u = 0 on ∂Ω, where Ω is a bounded, simply connected smooth domain and each k_j (j = 1, …, k) is a prescribed positive constant. The result we prove is that for any given non-degenerate critical point X_0 = (x_{0,1}, …, x_{0,k}) of the Kirchhoff-Routh function defined on Ω^k corresponding to (k_1, …, k_k), there exists a stationary classical solution approximating a stationary k-point vortex solution. Moreover, as λ → +∞, the support of the vorticity shrinks to {x_{0,j}}, and the local vorticity strength near each x_{0,j} approaches k_j, j = 1, …, k. This result completes the study of the above problem for p ≥ 0, since the cases p > 1, p = 1, and p = 0 have already been studied in [11, 12] and [13], respectively.
Abstract: In this paper, the Tikhonov regularization method was used to solve the non-degenerate compact linear operator equation, which is a well-known ill-posed problem. Apart from the usual error level, the noise data were supposed to satisfy some additional monotonic condition. Moreover, under the assumption that the singular values of the operator have power form, improved convergence rates of the regularized solution were worked out.
Funding: the National Natural Science Foundation of China (Nos. 91858212 and U1505232), the Special Project of the National Program on Global Change and Air-Sea Interaction (No. GASI-GEOGE-1), the Supporting Project of the Youth Marine Science Foundation of the East China Sea Branch of the State Oceanic Administration (No. 201704), and the Open Fund of the Key Laboratory of Marine Geology and Environment, Chinese Academy of Sciences (No. MGE2020KG02).
Abstract: Bathymetry data are usually obtained via single-beam or multibeam sounding; however, these methods exhibit low efficiency and coverage and are dependent on various parameters, including the condition of the vessel and the sea state. To overcome these limitations, we propose a method for marine bathymetry inversion based on satellite altimetry gravity anomaly data as a modification of the gravity-geologic method (GGM), a conventional terrain inversion method based on gravity data. In accordance with its principle, the modified method adopts a rectangular prism model for modeling the short-wavelength gravity anomaly and the Tikhonov regularization method to integrate the geophysical constraints, including the a priori water depth data and the characteristics of the sea bottom relief. The a priori water depth data can be obtained from shipborne measurements, borehole information, etc., and an existing bathymetry/terrain model can be taken as the initial model. Marquardt's method is used during the inversion process, and the regularization parameter can be determined adaptively. A model test and an application to the West Philippine Basin indicate the feasibility and effectiveness of the proposed method. The results indicate the capability of the proposed method to improve the overall accuracy of the water depth data, so it can be used to conduct a preliminary study of ocean depths. Additionally, the results show that in the improved GGM the density difference parameter has lost its original physical meaning and does not have a great impact on the inversion process. Because of the boundedness of the study area, the inversion result may exhibit a lower confidence level near the margin than near the center. Furthermore, the modified GGM is time- and memory-intensive compared with the conventional GGM.
Abstract: This article is devoted to the regularization of nonlinear ill-posed problems with accretive operators in Banach spaces. The data involved are assumed to be known only approximately. The authors concentrate their discussion on the convergence rates of regular solutions.