Multi-label feature selection (MFS) is a crucial dimensionality reduction technique aimed at identifying informative features associated with multiple labels. However, traditional centralized methods face significant challenges in privacy-sensitive and distributed settings, often neglecting label dependencies and suffering from low computational efficiency. To address these issues, we introduce a novel framework, Fed-MFSDHBCPSO: federated MFS via a dual-layer hybrid breeding cooperative particle swarm optimization algorithm with manifold and sparsity regularization (DHBCPSO-MSR). Leveraging the federated learning paradigm, Fed-MFSDHBCPSO allows clients to perform local feature selection (FS) using DHBCPSO-MSR. Locally selected feature subsets are protected with differential privacy (DP) and transmitted to a central server, where they are securely aggregated and refined through secure multi-party computation (SMPC) until global convergence is achieved. Within each client, DHBCPSO-MSR employs a dual-layer FS strategy. The inner layer constructs sample and label similarity graphs, generates Laplacian matrices to capture the manifold structure between samples and labels, and applies L2,1-norm regularization to sparsify the feature subset, yielding an optimized feature weight matrix. The outer layer uses a hybrid breeding cooperative particle swarm optimization algorithm to further refine the feature weight matrix and identify the optimal feature subset. The updated weight matrix is then fed back to the inner layer for further optimization. Comprehensive experiments on multiple real-world multi-label datasets demonstrate that Fed-MFSDHBCPSO consistently outperforms both centralized and federated baseline methods across several key evaluation metrics.
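The inner-layer sparsification relies on the L2,1 norm, which sums the Euclidean norms of the rows of the feature weight matrix so that entire rows (features) are driven to zero. The abstract does not give the exact update rule, so the following is only an illustrative NumPy sketch of the norm and its proximal operator (row-wise soft thresholding), the standard way an L2,1 penalty is handled:

```python
import numpy as np

def l21_norm(W):
    """||W||_{2,1}: sum of the Euclidean norms of the rows of W."""
    return np.sum(np.linalg.norm(W, axis=1))

def prox_l21(W, t):
    """Proximal operator of t * ||.||_{2,1}: row-wise soft thresholding.
    Rows whose norm falls below t are zeroed, discarding whole features."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

W = np.array([[3.0, 4.0],   # row norm 5.0   -> kept, shrunk
              [0.1, 0.1],   # row norm ~0.14 -> zeroed
              [0.0, 2.0]])  # row norm 2.0   -> kept, shrunk
W_sparse = prox_l21(W, 0.5)
```

The row-wise zeroing is what distinguishes L2,1 from a plain L1 penalty: it selects or discards features jointly across all labels.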
Owing to the constraints of depth sensing technology, images acquired by depth cameras are inevitably mixed with various noises. For depth maps presented in gray values, this research proposes a novel denoising model combining the graph-based transform (GBT) with dual graph Laplacian regularization (DGLR), termed DGLR-GBT. This model specifically aims to remove Gaussian white noise by capitalizing on the nonlocal self-similarity (NSS) and piecewise smoothness properties intrinsic to depth maps. Within the group sparse coding (GSC) framework, a combination of GBT and DGLR is implemented. First, within each group, the graph is constructed using estimates of the true values of the averaged blocks instead of the observations. Second, the graph Laplacian regularization terms are constructed based on the rows and columns of the similar-block groups, respectively. Finally, the solution is obtained efficiently by combining the alternating direction method of multipliers (ADMM) with a weighted thresholding method in the GBT domain.
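The graph Laplacian regularizers above penalize signals that vary strongly across similar rows or columns. As a small illustration (not the paper's exact construction), here is the combinatorial Laplacian L = D - W of a similarity graph and the quadratic form x^T L x from which such regularization terms are built:

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial graph Laplacian L = D - W of a weighted similarity graph."""
    return np.diag(W.sum(axis=1)) - W

# Toy 3-node chain: nodes 0-1 and 1-2 are "similar".
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
L = graph_laplacian(W)

# x^T L x = sum over edges of w_ij * (x_i - x_j)^2: zero for signals that
# are constant across similar nodes, large for signals that oscillate.
smooth = np.array([1.0, 1.0, 1.0])
rough = np.array([1.0, -1.0, 1.0])
```

Minimizing x^T L x therefore pushes the reconstruction toward piecewise smoothness along the graph, which is exactly the prior the DGLR terms encode for depth maps.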
In this study, we present a deterministic convergence analysis of Gated Recurrent Unit (GRU) networks enhanced by a smoothing L_(1) regularization technique. While GRU architectures effectively mitigate gradient vanishing/exploding issues in sequential modeling, they remain prone to overfitting, particularly under noisy or limited training data. Traditional L_(1) regularization, despite enforcing sparsity and accelerating optimization, introduces non-differentiable points in the error function, leading to oscillations during training. To address this, we propose a novel smoothing L_(1) regularization framework that replaces the non-differentiable absolute-value function with a quadratic approximation, ensuring gradient continuity and stabilizing the optimization landscape. Theoretically, we rigorously establish three key properties of the resulting smoothing-L_(1)-regularized GRU (SL_(1)-GRU) model: (1) monotonic decrease of the error function across iterations, (2) weak convergence characterized by vanishing gradients as iterations approach infinity, and (3) strong convergence of network weights to fixed points under finite conditions. Comprehensive experiments on benchmark datasets spanning function approximation, classification (KDD Cup 1999, MNIST), and regression tasks (Boston Housing, Energy Efficiency) demonstrate SL_(1)-GRU's superiority over baseline models (RNN, LSTM, GRU, L_(1)-GRU, L_(2)-GRU). Empirical results reveal that SL_(1)-GRU achieves 1.0%-2.4% higher test accuracy in classification and 7.8%-15.4% lower mean squared error in regression compared with the unregularized GRU, while reducing training time by 8.7%-20.1%. These outcomes validate the method's efficacy in balancing computational efficiency and generalization capability, and they strongly corroborate the theoretical analysis. The proposed framework not only resolves the non-differentiability challenge of L_(1) regularization but also provides a theoretical foundation for convergence guarantees in recurrent neural network training.
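The core idea, replacing |w| near zero with a quadratic so the penalty is differentiable everywhere, can be sketched as a Huber-style function. The paper's exact smoothing function may differ; this is only one common quadratic approximation:

```python
import numpy as np

def smooth_abs(x, a):
    """Quadratic smoothing of |x|: x^2/(2a) + a/2 for |x| < a, |x| otherwise.
    Value and gradient are both continuous at the seam |x| = a."""
    return np.where(np.abs(x) < a, x * x / (2 * a) + a / 2, np.abs(x))

def smooth_abs_grad(x, a):
    """Gradient of smooth_abs: x/a inside the smoothed zone, sign(x) outside,
    so weight updates no longer oscillate around zero."""
    return np.where(np.abs(x) < a, x / a, np.sign(x))
```

Unlike sign(x), the smoothed gradient passes through zero linearly, which is what removes the training oscillations the abstract attributes to plain L_(1) regularization.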
Absorption compensation is a process involving the exponential amplification of reflection amplitudes. This process amplifies both the seismic signal and the noise, thereby substantially reducing the signal-to-noise ratio of seismic data. Therefore, this paper proposes a multichannel inversion absorption compensation method based on structure tensor regularization. First, the structure tensor is used to extract the spatial inclination of seismic signals, and a spatial prediction filter is designed along the inclination direction. The spatial prediction filter is then introduced into the regularization condition of multichannel inversion absorption compensation, and the compensation is realized within the framework of multichannel inversion theory. The spatial predictability of seismic signals is thus built into the objective function of the absorption compensation inversion, so the inversion system can effectively suppress the noise amplification effect during compensation and improve the recovery accuracy of high-frequency signals. Synthetic and field data tests demonstrate the accuracy and effectiveness of the proposed method.
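The first step, estimating the local inclination (dip) from the structure tensor, can be illustrated as follows. This is a bare-bones sketch: the dominant orientation is taken from the eigenvector of the smallest eigenvalue of the averaged gradient outer products, whereas a production implementation (and presumably the paper's) would smooth the tensor components over a spatial window:

```python
import numpy as np

def local_dip(patch):
    """Dominant structural orientation of a 2-D patch from its structure
    tensor J = [[<gx*gx>, <gx*gz>], [<gx*gz>, <gz*gz>]].  The eigenvector
    of the smallest eigenvalue points along the events (in degrees)."""
    gz, gx = np.gradient(patch)          # derivative along rows, then columns
    J = np.array([[np.mean(gx * gx), np.mean(gx * gz)],
                  [np.mean(gx * gz), np.mean(gz * gz)]])
    w, v = np.linalg.eigh(J)             # eigenvalues in ascending order
    vx, vz = v[:, 0]                     # smallest-eigenvalue eigenvector
    return np.degrees(np.arctan2(vz, vx))

# A patch that varies only down the rows models horizontal layering,
# so the estimated dip is 0 degrees (modulo 180, eigenvector sign is arbitrary).
dip = local_dip(np.outer(np.arange(8.0), np.ones(8)))
```

The prediction filter is then oriented along this dip so that coherent signal is predictable across traces while random noise is not.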
Energy resolution calibration is crucial for analyzing gamma-ray spectra measured with a scintillation detector. A locally constrained regularization method is proposed to determine the resolution calibration parameters. First, a Monte Carlo simulation model consistent with the actual measurement system was constructed to obtain the energy deposition distribution in the scintillation crystal. The regularization objective function is then established based on weighted least squares plus additional constraints, which are designed using a special weighting scheme based on the incident gamma-ray energies. Subsequently, an intelligent algorithm is introduced to search for the optimal resolution calibration parameters by minimizing the objective function. The most appropriate regularization parameter was determined through numerical experiments: when the regularization parameter was 30, the calibrated results exhibited the minimum RMSE. Simulations and test-pit experiments were conducted to verify the performance of the proposed method. The simulation results demonstrate that the proposed algorithm determines resolution calibration parameters more accurately than traditional weighted least squares, and the test-pit experiments show that the R-squared values between the calibrated and measured spectra exceed 0.99. The accurate resolution calibration parameters determined by the proposed method lay the foundation for gamma-ray spectral processing and simulation benchmarking.
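A common scintillator resolution model fits FWHM(E)^2 as a quadratic in energy. The plain weighted least-squares fit below is the unregularized baseline the abstract says the locally constrained method improves on; the specific model form and weights are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def fit_resolution(E, fwhm, w):
    """Weighted least-squares fit of FWHM(E)^2 = a + b*E + c*E^2.
    Returns the coefficient vector [a, b, c]."""
    X = np.vstack([np.ones_like(E), E, E * E]).T   # design matrix
    Wm = np.diag(w)                                # per-line weights
    return np.linalg.solve(X.T @ Wm @ X, X.T @ Wm @ (fwhm ** 2))
```

On noise-free synthetic data the fit recovers the generating coefficients exactly (up to conditioning); on real spectra the locally constrained regularization of the paper is what keeps the fit stable.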
Target tracking is an essential task in contemporary computer vision applications. However, its effectiveness is susceptible to model drift caused by changes in target appearance, which often compromises tracking robustness and precision. In this paper, a universally applicable method based on correlation filters is introduced to mitigate model drift in complex scenarios. It employs temporal-confidence samples as priors to guide the model update process and ensure its precision and consistency over a long period. An improved update mechanism based on the peak-sidelobe-to-peak-correlation-energy (PSPCE) criterion is proposed, which selects high-confidence samples along the temporal dimension to update the temporal-confidence samples. Extensive experiments on various benchmarks demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods; it is especially robust when the target appearance changes significantly and achieves a balance between precision and speed. Specifically, on the object tracking benchmark (OTB-100) dataset, compared with the baseline, the tracking precision of our model improves by 8.8%, 8.8%, 5.1%, 5.6%, and 6.9% for background clutter, deformation, occlusion, rotation, and illumination variation, respectively. These results indicate that the proposed method can significantly enhance the robustness and precision of target tracking in dynamic and challenging environments, offering a reliable solution for applications such as real-time monitoring, autonomous driving, and precision guidance.
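The PSPCE criterion itself is defined in the paper; a closely related, standard confidence measure for correlation-filter response maps is the average peak-to-correlation energy (APCE). The sketch below uses APCE only to illustrate the idea of confidence-gated model updates, and is not the paper's exact criterion:

```python
import numpy as np

def apce(resp):
    """Average peak-to-correlation energy of a response map.
    One sharp peak -> high APCE; a flat or multi-modal response
    (occlusion, drift) -> low APCE."""
    low = resp.min()
    return (resp.max() - low) ** 2 / np.mean((resp - low) ** 2)

sharp = np.zeros((10, 10)); sharp[5, 5] = 1.0   # confident detection
ramp = np.arange(100.0).reshape(10, 10) / 99.0  # diffuse, low-confidence map
# Gate the filter update on confidence: only high-confidence frames
# contribute samples, which is what suppresses model drift.
update_model = apce(sharp) > apce(ramp)
```

Selecting update samples by such a confidence score along the temporal dimension is the mechanism the abstract describes for keeping the model consistent over long sequences.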
Full waveform inversion (FWI) is a precise parameter inversion method that harnesses the complete wavefield information of seismic waves and holds the potential to characterize the detailed features of the model with high accuracy. However, due to inaccurate initial models, the absence of low-frequency data, and incomplete observational data, FWI exhibits pronounced nonlinear characteristics, and its inversion capability is constrained when the strata are buried deep. To enhance the accuracy and precision of FWI, this paper introduces a novel approach to address these challenges: fractional-order anisotropic total p-variation regularization for full waveform inversion (FATpV-FWI). Building upon total variation (TV) regularization, this method incorporates fractional-order TV regularization into the inversion objective function and solves the resulting problem with the alternating direction method of multipliers. This approach mitigates the staircase (step) effect of total variation in seismic inversion, facilitating the reconstruction of sharp interfaces of geophysical parameters while smoothing background variations. Simultaneously, replacing integer-order differences with fractional-order differences strengthens the correlation among seismic data and diminishes the scattering effect caused by integer-order differences in seismic inversion. Model tests validate the efficacy of this method and highlight its ability to enhance the overall accuracy of the inversion process.
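In ADMM solvers for TV-type regularizers, the regularization subproblem typically reduces to shrinkage (soft thresholding) of the difference coefficients, whether those differences are integer- or fractional-order. The shrinkage step itself is standard and can be sketched as:

```python
import numpy as np

def soft_threshold(x, t):
    """Shrinkage operator prox_{t|.|}: the closed-form z-update in ADMM
    for an l1/TV-type penalty applied to difference coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

Small coefficients (background fluctuations) are zeroed while large ones (sharp interfaces) survive with a constant shift, which is how the regularizer smooths the background yet preserves edges.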
Modern warfare demands weapons capable of penetrating substantial structures, which presents significant challenges to the reliability of the electronic devices that are crucial to a weapon's performance. Owing to the miniaturization of electronic components, it is difficult to directly measure or numerically predict the mechanical response of small critical interconnections in board-level packaging structures, which is needed to ensure the mechanical reliability of electronic devices in projectiles under harsh working conditions. To address this issue, an indirect measurement method using Bayesian regularization-based load identification was proposed in this study, based on finite element (FE) predictions, to estimate the load applied to critical interconnections of board-level packaging structures during projectile penetration. For predicting the high-strain-rate penetration process, an FE model was established with elasto-plastic constitutive models of the representative packaging materials (i.e., the solder material and the epoxy molding compound), whose constitutive parameters were calibrated against split-Hopkinson pressure bar experiments. Because the impact-induced dynamic bending of the printed circuit board produced alternating tensile-compressive loading on the solder joints during penetration, the corner solder joints in the edge regions experienced the highest S11 stress and strain, making them most prone to failure. Based on FE predictions at different structural scales, an improved Bayesian method based on augmented Tikhonov regularization was proposed to address the ill-posed matrix inversion and noise sensitivity in load identification at the critical solder joints. By incorporating a wavelet thresholding technique, the method resolves the problem of poor load identification accuracy at high noise levels. The proposed method achieves satisfactorily small relative errors and high correlation coefficients in identifying the mechanical response of local interconnections in board-level packaging structures, while balancing the smoothness of the response curves with the accuracy of peak identification. At medium and low noise levels, the relative error is less than 6%, and it remains below 10% at high noise levels. The proposed method provides an effective indirect approach for determining the boundary conditions of localized solder joints during projectile penetration, and its philosophy can be readily extended to other multiscale analyses of highly nonlinear materials and structures under extreme loading conditions.
We use extrapolated Tikhonov regularization to address the ill-posed problem of 3D density inversion of gravity gradient data. The regularization parameters in the proposed method reduce the deviations between the calculated and observed data. We also use a depth weighting function based on the eigenvectors of the gravity gradient tensor to eliminate undesired effects owing to the fast attenuation of the position function. Model data suggest that extrapolated Tikhonov regularization, in conjunction with the depth weighting function, can effectively recover the 3D distribution of density anomalies. We conduct density inversion of gravity gradient data from the Kauring test site in Australia and compare the inversion results with published research. The proposed inversion method can be used to obtain the 3D density distribution of underground anomalies.
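Standard (non-extrapolated) Tikhonov regularization, the starting point of the method above, replaces the unstable least-squares solution with a damped one. A minimal sketch of the closed form:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov/ridge solution of min ||Ax - b||^2 + lam * ||x||^2,
    i.e. x = (A^T A + lam * I)^(-1) A^T b.  A positive lam stabilizes
    an ill-posed system at the cost of some bias toward zero."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

With lam = 0 this is ordinary least squares; increasing lam trades data fit for stability, and the extrapolation scheme of the paper is aimed at reducing the bias that this damping introduces.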
To decrease the sensitivity of the constant scale parameter, adaptively optimize the scale parameter in the iteration regularization model (IRM), and attain a desirable level of applicability for image denoising, a novel IRM with an adaptive scale parameter is proposed. First, the classic regularization term is modified and the equation of the adaptive scale parameter is deduced. Then, the initial value of the varying scale parameter is obtained from the trend of the number of iterations and the scale parameter sequence vectors. Finally, the novel iterative regularization method is applied to image denoising. Numerical experiments show that, compared with the IRM with a constant scale parameter, the proposed method with a varying scale parameter can not only reduce the number of iterations when the scale parameter becomes smaller, but also efficiently remove noise when the scale parameter becomes larger, while well preserving the details of images.
Studies of wave-current interactions are vital for the safe design of structures. Regular waves in the presence of uniform, linear-shear, and quadratic-shear currents are explored with the High-Level Green-Naghdi model in this paper. The five-point central difference method is used for spatial discretization, and the fourth-order Adams predictor-corrector scheme is employed for time marching. The domain-decomposition method is applied for wave-current generation and absorption. The effects of currents on the wave profile and velocity field are examined under two conditions: the same current velocity at the still-water level, and the same current flow volume. Wave profiles and velocity fields demonstrate substantial differences among the three types of currents owing to the diverse vertical distributions of current velocity and vorticity. Loads on small-scale vertical cylinders subjected to regular waves and the three types of background currents with the same flow volume are then investigated. The maximum load intensity and load fluctuation amplitude in uniform, linear-shear, and quadratic-shear currents increase sequentially. The stretched superposition method overestimates the maximum load intensity and load fluctuation amplitude in opposing currents and underestimates these values in following currents; it yields a poor approximation for strongly nonlinear waves, particularly in the case of the opposing quadratic-shear current.
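The five-point central difference used for spatial discretization is the standard fourth-order interior stencil. A sketch of the first-derivative version:

```python
import numpy as np

def d1_five_point(f, h):
    """Five-point central difference for the first derivative at interior
    points: f'_i ~ (f_{i-2} - 8 f_{i-1} + 8 f_{i+1} - f_{i+2}) / (12 h),
    fourth-order accurate in the grid spacing h.  Returns len(f) - 4 values."""
    return (f[:-4] - 8.0 * f[1:-3] + 8.0 * f[3:-1] - f[4:]) / (12.0 * h)
```

The O(h^4) accuracy of this stencil is what allows relatively coarse spatial grids in the Green-Naghdi solver while keeping the velocity-field errors below the time-stepping error of the Adams scheme.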
Funding (GBT-DGLR depth map denoising): National Natural Science Foundation of China (No. 62372100).
Funding (SL_(1)-GRU convergence analysis): National Science Fund for Distinguished Young Scholars (No. 62025602); National Natural Science Foundation of China (Nos. U22B2036 and 11931015); Fok Ying-Tong Education Foundation of China (No. 171105); Fundamental Research Funds for the Central Universities (No. G2024WD0151); and, in part, the Tencent Foundation and the XPLORER PRIZE.
Funding (multichannel absorption compensation): National Key R&D Program of China (Grant No. 2018YFA0702504); Sinopec research project (P22162).
Funding (energy resolution calibration): National Natural Science Foundation of China (No. 41804141).
Funding (PSPCE target tracking): Natural Science Foundation of Sichuan Province (Grant No. 2025ZNSFSC0522); National Natural Science Foundation of China (Grants No. 61775030 and No. 61571096).
Funding (FATpV-FWI): China Postdoctoral Science Foundation (Grant No. 2024MF750281); Postdoctoral Fellowship Program of CPSF (Grant No. GZC20230326); Natural Science Foundation of Sichuan Province (Grant No. 2025ZNSFSC1170); Sichuan Science and Technology Program (Grant No. 2023ZYD0158).
Funding (Bayesian load identification): National Natural Science Foundation of China (Grant Nos. 52475166 and 52175148); Regional Collaboration Project of Shanxi Province (Grant No. 202204041101044).
Abstract: Modern warfare demands weapons capable of penetrating substantial structures, which presents significant challenges to the reliability of the electronic devices that are crucial to the weapon's performance. Owing to the miniaturization of electronic components, it is difficult to directly measure or numerically predict the mechanical response of the small critical interconnections in board-level packaging structures, and hence to ensure the mechanical reliability of electronic devices in projectiles under harsh working conditions. To address this issue, this study proposes an indirect measurement method based on Bayesian regularization load identification, which uses finite element (FE) predictions to estimate the load applied to critical interconnections of board-level packaging structures during projectile penetration. To predict the high-strain-rate penetration process, an FE model was established with elasto-plastic constitutive models of the representative packaging materials (that is, solder material and epoxy molding compound), whose constitutive parameters were calibrated against split-Hopkinson pressure bar experiments. Because the impact-induced dynamic bending of the printed circuit board produces alternating tensile-compressive loading on the solder joints during penetration, the corner solder joints in the edge regions experience the highest S11 stress and strain, making them the most prone to failure. Based on FE predictions at different structural scales, an improved Bayesian method built on augmented Tikhonov regularization was proposed to address the ill-posed matrix inversion and noise sensitivity in load identification at the critical solder joints. By incorporating a wavelet thresholding technique, the method resolves the problem of poor load identification accuracy at high noise levels. The proposed method achieves satisfactorily small relative errors and high correlation coefficients in identifying the mechanical response of local interconnections in board-level packaging structures, while effectively balancing the smoothness of the response curves with the accuracy of peak identification. At medium and low noise levels, the relative error is less than 6%; at high noise levels, it is less than 10%. The proposed method provides an effective indirect approach for determining the boundary conditions of localized solder joints during projectile penetration, and its philosophy can be readily extended to other multiscale analyses of highly nonlinear materials and structures under extreme loading conditions.
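The abstract's full augmented-Bayesian formulation and wavelet thresholding go beyond a short sketch, but the core ill-posed step it regularizes, recovering a load history from a measured response through a transfer matrix, can be illustrated with plain Tikhonov regularization. The kernel, noise level, and regularization weight below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def tikhonov_identify(G, y, lam):
    # Regularized least squares: minimize ||G f - y||^2 + lam ||f||^2,
    # i.e. solve the normal equations (G^T G + lam I) f = G^T y.
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ y)

# Synthetic demo: an impulsive "load" passed through a causal,
# exponentially decaying transfer kernel, recovered from a noisy response.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
f_true = np.exp(-((t - 0.5) ** 2) / 0.01)                    # impulsive load
G = np.tril(np.exp(-5.0 * np.abs(t[:, None] - t[None, :])))  # transfer matrix
y = G @ f_true + 0.01 * rng.standard_normal(t.size)          # noisy response
f_hat = tikhonov_identify(G, y, lam=1e-3)
print(float(np.corrcoef(f_true, f_hat)[0, 1]))
```

The paper's augmented Tikhonov approach additionally estimates the regularization parameter and noise level in a Bayesian manner; here `lam` is simply fixed by hand.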
Funding: supported by National Major Special Equipment Development (No. 2011YQ120045) and the National Natural Science Fund (Nos. 41074050 and 41304023)
Abstract: We use extrapolated Tikhonov regularization to deal with the ill-posed problem of 3D density inversion of gravity gradient data. The use of regularization parameters in the proposed method reduces the deviations between calculated and observed data. We also use a depth weighting function based on the eigenvectors of the gravity gradient tensor to eliminate undesired effects owing to the fast attenuation of the kernel function with depth. Model tests suggest that extrapolated Tikhonov regularization in conjunction with the depth weighting function can effectively recover the 3D distribution of density anomalies. We conduct density inversion of gravity gradient data from the Kauring test site in Australia and compare the inversion results with previously published results. The proposed inversion method can be used to obtain the 3D density distribution of underground anomalies.
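As a loose illustration of the two ingredients in this abstract, the sketch below combines a depth-weighted Tikhonov penalty (so that the rapidly decaying kernel does not force all density to the surface) with a simple Richardson-type extrapolation in the regularization parameter, which cancels the leading O(λ) regularization bias. The 1D geometry, 1/r³ kernel, and the parameters β, z0, and λ are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def depth_weighted_tikhonov(A, d, z, lam, beta=2.0, z0=0.05):
    # Tikhonov inversion with a depth-weighting penalty:
    # minimize ||A m - d||^2 + lam ||W m||^2 with w(z) = (z + z0)^(-beta/2),
    # which penalizes shallow cells more and so compensates the kernel decay.
    W = np.diag((z + z0) ** (-beta / 2.0))
    return np.linalg.solve(A.T @ A + lam * (W.T @ W), A.T @ d)

# Synthetic demo: a vertical column of density cells below x = 0.5, with a
# 1/r^3 gravity-gradient-like kernel observed along the surface.
nz = 32
z = np.linspace(0.05, 1.0, nz)                 # cell depths
xo = np.linspace(0.0, 1.0, nz)                 # surface observation points
A = 1.0 / ((xo[:, None] - 0.5) ** 2 + z[None, :] ** 2) ** 1.5
A /= A.max()                                   # normalize kernel scale
m_true = np.zeros(nz)
m_true[10:14] = 1.0                            # buried density anomaly
d = A @ m_true

# Extrapolation in the regularization parameter: since the regularized
# solution behaves like m(lam) = m + lam*c + O(lam^2), the combination
# 2 m(lam/2) - m(lam) cancels the leading bias term.
lam = 1e-2
m1 = depth_weighted_tikhonov(A, d, z, lam)
m2 = depth_weighted_tikhonov(A, d, z, lam / 2.0)
m_ext = 2.0 * m2 - m1
print(m_ext.shape)
```

The paper extrapolates over a sequence of regularization parameters and works in 3D with tensor data; the two-solution combination above is only the simplest instance of the idea.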
Funding: the National Natural Science Foundation of China (No. 60702069), the Research Project of the Department of Education of Zhejiang Province (No. 20060601), the Natural Science Foundation of Zhejiang Province (No. Y1080851), and the Shanghai International Cooperation on Region of France (No. 06SR07109)
Abstract: In order to decrease the sensitivity to a constant scale parameter, adaptively optimize the scale parameter during iteration of the iteration regularization model (IRM), and attain a desirable level of applicability for image denoising, a novel IRM with an adaptive scale parameter is proposed. First, the classic regularization term is modified and the equation for the adaptive scale parameter is derived. Then, the initial value of the varying scale parameter is obtained from the trend of the number of iterations and the scale parameter sequence. Finally, the novel iterative regularization method is applied to image denoising. Numerical experiments show that, compared with the IRM with a constant scale parameter, the proposed method with a varying scale parameter not only reduces the number of iterations when the scale parameter becomes smaller, but also efficiently removes noise when the scale parameter becomes larger, while well preserving image details.
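The mechanism of an iterative regularization scheme whose scale parameter changes across iterations can be sketched on a much simpler functional than the paper's. Below, gradient-descent iterations are run on a quadratic smoothing objective with a scale parameter following a geometric schedule; the quadratic (Laplacian) penalty, the schedule, and all numerical values are simplifying assumptions standing in for the paper's adaptive rule:

```python
import numpy as np

def laplacian(u):
    # Five-point discrete Laplacian with replicated (edge) boundaries.
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u

def iterative_denoise(f, lam0=0.2, growth=1.05, steps=30, tau=0.1):
    # Gradient-descent iterations on 0.5||u - f||^2 + 0.5*lam_k*|grad u|^2,
    # where the scale parameter lam_k varies from one iteration to the next
    # (here a simple geometric schedule rather than an adaptive rule).
    u, lam = f.copy(), lam0
    for _ in range(steps):
        u = u + tau * ((f - u) + lam * laplacian(u))
        lam *= growth
    return u

rng = np.random.default_rng(1)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                  # piecewise-constant test image
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = iterative_denoise(noisy)
mse = lambda a, b: float(np.mean((a - b) ** 2))
print(mse(noisy, clean), mse(denoised, clean))
```

The step size is kept small enough that the explicit iteration remains stable even at the largest scale parameter in the schedule.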
Funding: supported by the Development and Application Project of Ship CAE Software
Abstract: Studies of wave-current interactions are vital for the safe design of structures. In this paper, regular waves in the presence of uniform, linear-shear, and quadratic-shear currents are explored with the High-Level Green-Naghdi model. The five-point central difference method is used for spatial discretization, and the fourth-order Adams predictor-corrector scheme is employed for time marching. The domain-decomposition method is applied for wave-current generation and absorption. The effects of currents on the wave profile and velocity field are examined under two conditions: the same current velocity at the still-water level, and the same current flow volume. Wave profiles and velocity fields show substantial differences among the three types of currents owing to their different vertical distributions of current velocity and vorticity. Loads on small-scale vertical cylinders subjected to regular waves and the three types of background currents with the same flow volume are then investigated. The maximum load intensity and load fluctuation amplitude increase in the order of uniform, linear-shear, and quadratic-shear currents. The stretched superposition method overestimates the maximum load intensity and load fluctuation amplitude in opposing currents and underestimates them in following currents, and it yields a poor approximation for strongly nonlinear waves, particularly in the case of the opposing quadratic-shear current.
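The numerical machinery named in this abstract (five-point central differences in space, a fourth-order Adams predictor-corrector in time) can be demonstrated on a much simpler model problem. The sketch below applies both schemes to one-dimensional linear advection on a periodic grid; the Green-Naghdi equations themselves, wave-current generation, and domain decomposition are well beyond this toy, and the grid size, CFL number, and initial profile are arbitrary choices:

```python
import numpy as np

def dudx(u, dx):
    # Five-point (fourth-order) central difference on a periodic grid.
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

def rhs(u, c, dx):
    return -c * dudx(u, dx)          # linear advection: u_t = -c u_x

def advect_abm4(u0, c, dx, dt, nsteps):
    # Fourth-order Adams predictor-corrector time marching
    # (AB4 predictor, AM4 corrector), bootstrapped with three RK4 steps.
    u = u0.copy()
    hist = [rhs(u, c, dx)]
    for _ in range(3):                               # RK4 start-up
        k1 = rhs(u, c, dx)
        k2 = rhs(u + 0.5 * dt * k1, c, dx)
        k3 = rhs(u + 0.5 * dt * k2, c, dx)
        k4 = rhs(u + dt * k3, c, dx)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        hist.append(rhs(u, c, dx))
    for _ in range(nsteps - 3):
        f3, f2, f1, f0 = hist[-4], hist[-3], hist[-2], hist[-1]
        up = u + dt * (55 * f0 - 59 * f1 + 37 * f2 - 9 * f3) / 24  # predict
        fp = rhs(up, c, dx)
        u = u + dt * (9 * fp + 19 * f0 - 5 * f1 + f2) / 24         # correct
        hist.append(rhs(u, c, dx))
    return u

n, c = 128, 1.0
x = np.arange(n) / n
dx = 1.0 / n
dt = 0.25 * dx / c                                   # conservative CFL
u0 = np.exp(-((x - 0.5) / 0.1) ** 2)                 # Gaussian hump
u = advect_abm4(u0, c, dx, dt, round(1.0 / dt))      # one full period
print(float(np.max(np.abs(u - u0))))
```

After one full traversal of the periodic domain the profile should return to its initial position, so the printed value is the scheme's accumulated error.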