Multi-label feature selection (MFS) is a crucial dimensionality reduction technique aimed at identifying informative features associated with multiple labels. However, traditional centralized methods face significant challenges in privacy-sensitive and distributed settings, often neglecting label dependencies and suffering from low computational efficiency. To address these issues, we introduce a novel framework, Fed-MFSDHBCPSO: federated MFS via a dual-layer hybrid breeding cooperative particle swarm optimization algorithm with manifold and sparsity regularization (DHBCPSO-MSR). Leveraging the federated learning paradigm, Fed-MFSDHBCPSO allows clients to perform local feature selection (FS) using DHBCPSO-MSR. Locally selected feature subsets are protected with differential privacy (DP) and transmitted to a central server, where they are securely aggregated and refined through secure multi-party computation (SMPC) until global convergence is achieved. Within each client, DHBCPSO-MSR employs a dual-layer FS strategy. The inner layer constructs sample and label similarity graphs, generates Laplacian matrices to capture the manifold structure between samples and labels, and applies L₂,₁-norm regularization to sparsify the feature subset, yielding an optimized feature weight matrix. The outer layer uses a hybrid breeding cooperative particle swarm optimization algorithm to further refine the feature weight matrix and identify the optimal feature subset. The updated weight matrix is then fed back to the inner layer for further optimization. Comprehensive experiments on multiple real-world multi-label datasets demonstrate that Fed-MFSDHBCPSO consistently outperforms both centralized and federated baseline methods across several key evaluation metrics.
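The role of the L₂,₁-norm in the inner layer can be sketched in a few lines. This is a minimal illustration of the standard L₂,₁ mechanism, not the paper's implementation: penalizing the sum of row norms of the feature weight matrix drives entire rows to zero, and features are then ranked by row norm. The matrix sizes and the `select_features` helper are illustrative assumptions.

```python
import numpy as np

def l21_norm(W):
    """L2,1-norm of a feature-weight matrix W (features x labels):
    the sum of the Euclidean norms of its rows. Penalizing it pushes
    whole rows toward zero, i.e. discards whole features."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def select_features(W, k):
    """Rank features by the row norms of W and keep the top-k indices."""
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 3))   # 6 candidate features, 3 labels
W[[1, 4]] *= 0.01             # rows 1 and 4 mimic near-irrelevant features
top = select_features(W, 4)   # keeps the four large-norm rows
```

With the two rows scaled toward zero, their norms are negligible and the selector retains the other four features, which is exactly the sparsification effect the inner layer exploits.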
Unraveling how the electronic structure, as shaped by d-d orbital coupling of transition metals, governs methanol oxidation reaction (MOR) performance can fundamentally guide the design of highly efficient catalysts. Herein, density functional theory (DFT) calculations were first performed to study the d-d orbital interaction of metallic PtPdCu, revealing that incorporating Pd and Cu atoms into the Pt system enhances the d-d electron interaction by transferring antibonding-orbital electrons of Pt to the surrounding Pd and Cu atoms. Guided by this theory, PtPdCu medium-entropy alloy aerogel (PtPdCu MEAA) catalysts were designed and systematically screened for the MOR in acidic, alkaline, and neutral electrolytes. Furthermore, DFT calculations and in-situ Fourier transform infrared spectroscopy indicate that PtPdCu MEAAs follow the direct pathway, with formate as the reactive intermediate that is directly oxidized to CO₂. For practical direct methanol fuel cells (DMFCs), an ultra-thin anode catalyst layer integrating the PtPdCu MEAAs (4-5 μm thick) delivers a higher peak power density of 35 mW/cm² than commercial Pt/C at 20 mW/cm² (~40 μm thick) under similar noble-metal loading, along with impressive stability retention at a constant current of 50 mA/cm² for 10 h. This work clearly demonstrates that optimizing intermediate adsorption via d-d orbital coupling is an effective strategy for designing highly efficient DMFC catalysts.
The finite volume method was applied to numerically simulate the bottom pressure field induced by regular waves, by vehicles in calm water, and by vehicles in regular waves. Near the numerical wave tank's boundaries, the solution of the Navier-Stokes (N-S) equations was forced toward the theoretical wave solution by incorporating momentum source terms, thereby reducing adverse effects such as wave reflection. Simulations using laminar flow, turbulent flow, and ideal fluid models were all found capable of capturing the waveform and bottom pressure of regular waves, agreeing well with experimental data. In predicting the bottom pressure field of the submerged vehicle, turbulent simulations accounting for fluid viscosity and boundary layer development gave more accurate predictions for the stern region than inviscid simulations. Because of the sphere's diffractive effect, the sphere's bottom pressure field in waves is not a linear superposition of the wave's and the sphere's individual bottom pressure fields. A slender submerged vehicle, however, diffracts waves only weakly, so its bottom pressure field in waves can be approximated as a linear superposition of the wave's and the vehicle's individual fields, which simplifies computation and analysis.
This paper investigates baseline designs under the baseline parameterization model in experimental design, focusing on the relationship between the K-aberration criterion and the word length pattern (WLP) of regular two-level designs. It provides a detailed analysis of the relationship between K₅ and the WLP for regular two-level designs with resolution t = 3, and proposes corresponding theoretical results. These results not only theoretically reveal the connection between the orthogonal parameterization model and the baseline parameterization model but also provide theoretical support for finding K-aberration-optimal regular two-level baseline designs, and the paper demonstrates how to apply them to evaluate and select optimal experimental designs. In practice, experimenters can use these results to quickly assess and select regular two-level baseline designs with minimal K-aberration by analyzing the WLP of a design. This allows the identification of key factors that significantly affect the experimental outcomes without frequently changing factor levels, thereby maximizing the benefit of the experiment.
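The WLP that this analysis builds on is mechanical to compute for a small regular two-level design. The sketch below (standard textbook construction, not the paper's K-aberration machinery) forms the defining contrast subgroup as the closure of the defining words under symmetric difference of their letter sets, then counts words by length; the example design is an assumed illustration.

```python
from itertools import combinations

def wlp(defining_words):
    """Word length pattern (A3, A4, ...) of a regular two-level design,
    given independent defining words such as 'ABD' (meaning I = ABD).
    The defining contrast subgroup is the closure of the words under
    symmetric difference of their letter sets."""
    words = set()
    for r in range(1, len(defining_words) + 1):
        for combo in combinations(defining_words, r):
            w = set()
            for g in combo:
                w ^= set(g)           # symmetric difference accumulates the product
            if w:
                words.add(frozenset(w))
    lengths = sorted(len(w) for w in words)
    return [lengths.count(k) for k in range(3, lengths[-1] + 1)]

# Example: a 2^(5-2) design with D = AB and E = AC, i.e. I = ABD = ACE = BCDE.
pattern = wlp(['ABD', 'ACE'])   # (A3, A4) = (2, 1); shortest word has length 3
```

The shortest word length is the resolution, so this example has t = 3, matching the class of designs analyzed in the paper.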
A toric patch is a kind of rational multisided patch associated with a finite set A of integer lattice points. A set of weights depending on a parameter is defined according to a regular decomposition of A. When all weights of the patch tend to infinity, the toric patch tends to a limiting form called its regular control surface. Different weights may induce different regular control surfaces of the same toric patch, which prompts the question of how many regular control surfaces a toric patch has. In this paper, we first study the regular decompositions of A using integer programming, and then establish the relationship between all regular decompositions of A and the corresponding state polytope. Moreover, we show that the number of regular control surfaces of a toric patch associated with A equals the number of regular decompositions of A. An algorithm to compute this number is provided; the algorithm also yields a method to construct all regular control surfaces of a toric patch. Finally, the application of the proposed results to shape deformation is demonstrated through several examples.
In this study, we present a deterministic convergence analysis of Gated Recurrent Unit (GRU) networks enhanced by a smoothing L₁ regularization technique. While GRU architectures effectively mitigate gradient vanishing/exploding issues in sequential modeling, they remain prone to overfitting, particularly under noisy or limited training data. Traditional L₁ regularization, despite enforcing sparsity and accelerating optimization, introduces non-differentiable points in the error function, leading to oscillations during training. To address this, we propose a novel smoothing L₁ regularization framework that replaces the non-differentiable absolute-value function with a quadratic approximation, ensuring gradient continuity and stabilizing the optimization landscape. Theoretically, we rigorously establish three key properties of the resulting smoothing-L₁-regularized GRU (SL₁-GRU) model: (1) monotonic decrease of the error function across iterations, (2) weak convergence, characterized by vanishing gradients as iterations approach infinity, and (3) strong convergence of the network weights to fixed points under finite conditions. Comprehensive experiments on benchmark datasets spanning function approximation, classification (KDD Cup 1999, MNIST), and regression (Boston Housing, Energy Efficiency) demonstrate SL₁-GRU's superiority over baseline models (RNN, LSTM, GRU, L₁-GRU, L₂-GRU). Empirically, SL₁-GRU achieves 1.0%-2.4% higher test accuracy in classification and 7.8%-15.4% lower mean squared error in regression than unregularized GRU, while reducing training time by 8.7%-20.1%. These outcomes validate the method's efficacy in balancing computational efficiency and generalization capability and strongly corroborate the theoretical analysis. The proposed framework not only resolves the non-differentiability of L₁ regularization but also provides a theoretical foundation for convergence guarantees in recurrent neural network training.
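The quadratic-near-zero idea can be made concrete with a Huber-style smoothing of |x|; this is an assumed form (the abstract does not give the paper's exact approximation, and the transition width `a` is an illustrative parameter), but it has the properties the analysis relies on: the penalty agrees with |x| outside a small zone and its gradient is continuous everywhere, including at the origin.

```python
import numpy as np

def smooth_l1(x, a=0.1):
    """Huber-style smoothing of |x|: quadratic for |x| <= a, |x| otherwise.
    The two pieces meet with matching value and slope at |x| = a."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= a, x**2 / (2 * a) + a / 2, np.abs(x))

def smooth_l1_grad(x, a=0.1):
    """Continuous gradient of the smoothed penalty: x/a inside the
    quadratic zone, sign(x) outside - no jump at x = 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= a, x / a, np.sign(x))
```

At x = a the quadratic branch evaluates to a²/(2a) + a/2 = a = |a| and its slope is a/a = 1 = sign(a), so both value and gradient are continuous, which removes the oscillation source that plain L₁ introduces.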
We use the Schrödinger–Newton equation to calculate the regularized self-energy of a particle using a regular self-gravitational and electrostatic potential derived from string T-duality. The particle mass M is no longer concentrated at a point but is diluted and described by a quantum-corrected smeared energy density, resulting in corrections to the particle's energy that are interpreted as a regularized self-energy. We extend these results to relativistic particles using the Klein–Gordon, Proca, and Dirac equations. An important finding is that a form of the generalized uncertainty principle (GUP) can be extracted from the corrected energy. This form of the GUP depends on the nature of the particle: for bosons (spin 0 and spin 1) we obtain a quadratic form of the GUP, while for fermions (spin 1/2) we obtain a linear form. The correlation we find between spin and the GUP may offer insights for investigating quantum gravity.
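For readers unfamiliar with the terminology, the "quadratic" and "linear" GUP forms have standard shapes in the literature, sketched below. The abstract does not give the paper's coefficients, which there would be fixed by the T-duality scale, so α and β here are generic model-dependent positive parameters, not the paper's values.

```latex
% Quadratic GUP (here: bosons, spin 0 and spin 1)
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta \,(\Delta p)^2\right)

% Linear GUP (here: fermions, spin 1/2)
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\left(1 - \alpha \,\Delta p\right)
```

The quadratic form implies a minimal measurable length of order \(\hbar\sqrt{\beta}\), while the linear term modifies the uncertainty relation at first order in Δp; the paper's result ties which form appears to the spin of the particle.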
Absorption compensation exponentially amplifies reflection amplitudes, boosting the seismic signal and the noise alike and thereby substantially reducing the signal-to-noise ratio of seismic data. To address this, this paper proposes a multichannel inversion absorption compensation method based on structure tensor regularization. First, the structure tensor is used to extract the spatial inclination of seismic events, and a spatial prediction filter is designed along the inclination direction. This filter is then introduced into the regularization condition of the multichannel inversion, and absorption compensation is carried out within the multichannel inversion framework. The spatial predictability of seismic signals is thereby built into the objective function of the compensation inversion, so the inversion system can effectively suppress the noise amplification inherent to absorption compensation and improve the recovery accuracy of high-frequency signals. Synthetic and field data tests demonstrate the accuracy and effectiveness of the proposed method.
Energy resolution calibration is crucial for analyzing gamma-ray spectra measured with a scintillation detector. A locally constrained regularization method is proposed to determine the resolution calibration parameters. First, a Monte Carlo simulation model consistent with the actual measurement system was constructed to obtain the energy deposition distribution in the scintillation crystal. The regularization objective function was then established from a weighted least-squares term plus additional constraints, the latter designed through a special weighting scheme based on the incident gamma-ray energies. An intelligent optimization algorithm was then used to search for the resolution calibration parameters that minimize the objective function. The most appropriate regularization parameter was determined through numerical experiments: with a regularization parameter of 30, the calibrated results exhibited the minimum RMSE. Simulations and test-pit experiments were conducted to verify the performance of the proposed method. The simulation results demonstrate that the proposed algorithm determines resolution calibration parameters more accurately than traditional weighted least squares, and in the test-pit experiments the R² values between the calibrated and measured spectra exceed 0.99. The accurate calibration parameters obtained by the proposed method lay the foundation for gamma-ray spectral processing and simulation benchmarking.
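The shape of such a regularized objective can be sketched as a weighted least-squares term plus a constraint term scaled by the regularization parameter. Both the FWHM parameterization (a common GEB-style form, FWHM(E) = a + b·√(E + c·E²)) and the prior-pull constraint are assumptions for illustration; the paper's actual model and energy-based weighting scheme are not given in the abstract.

```python
import numpy as np

def fwhm_model(E, a, b, c):
    """A common scintillator resolution parameterization (assumed here):
    FWHM(E) = a + b * sqrt(E + c * E^2), with E in MeV."""
    return a + b * np.sqrt(E + c * E**2)

def objective(params, E, fwhm_obs, weights, lam, prior):
    """Weighted least squares on the resolution data plus a constraint term
    pulling the parameters toward a prior estimate, scaled by lam."""
    resid = fwhm_model(E, *params) - fwhm_obs
    return float(np.sum(weights * resid**2) + lam * np.sum((params - prior)**2))

# Synthetic calibration points from known parameters (a, b, c):
true_p = np.array([0.01, 0.05, 1.5])
E = np.array([0.662, 1.173, 1.332, 2.614])   # MeV, typical check-source lines
obs = fwhm_model(E, *true_p)
w = 1.0 / E                                   # example energy-based weighting
```

Any search routine (the paper uses an intelligent optimization algorithm) can then minimize `objective` over (a, b, c); the constraint term is what keeps the fit locally anchored when the data alone are ambiguous.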
Target tracking is an essential task in contemporary computer vision applications. However, its effectiveness is susceptible to model drift caused by changes in target appearance, which often compromises tracking robustness and precision. In this paper, a universally applicable method based on correlation filters is introduced to mitigate model drift in complex scenarios. It employs temporal-confidence samples as a prior to guide the model update process and to keep the model precise and consistent over long periods. An improved update mechanism based on the peak side-lobe to peak correlation energy (PSPCE) criterion is proposed, which selects high-confidence samples along the temporal dimension to update the temporal-confidence samples. Extensive experiments on various benchmarks demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods; it is especially robust when the target appearance changes significantly, striking a balance between precision and speed. Specifically, on the object tracking benchmark (OTB-100) dataset, the tracking precision of our model improves over the baseline by 8.8%, 8.8%, 5.1%, 5.6%, and 6.9% for background clutter, deformation, occlusion, rotation, and illumination variation, respectively. These results indicate that the proposed method can significantly enhance the robustness and precision of target tracking in dynamic, challenging environments, offering a reliable solution for applications such as real-time monitoring, autonomous driving, and precision guidance.
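The abstract does not give the PSPCE formula, but the gating idea can be illustrated with a closely related, widely used confidence measure: the average peak-to-correlation energy (APCE) of the response map. Both the use of APCE as a stand-in and the running-mean update rule below are assumptions for illustration, not the paper's criterion.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a correlation response map:
    large for a single sharp peak (confident detection), small for a
    noisy or multi-modal map (likely drift, occlusion, deformation)."""
    gmax, gmin = float(response.max()), float(response.min())
    return (gmax - gmin) ** 2 / float(np.mean((response - gmin) ** 2))

def should_update(response, history, ratio=0.8):
    """Gate the filter update: accept the current frame as a high-confidence
    sample only if its score is not far below the running mean of past
    scores (hypothetical gating rule)."""
    score = apce(response)
    return len(history) == 0 or score >= ratio * float(np.mean(history))

rng = np.random.default_rng(1)
sharp = np.zeros((21, 21))
sharp[10, 10] = 1.0                     # ideal single-peak response
noisy = rng.uniform(size=(21, 21))      # response under appearance change
```

Skipping updates on low-confidence frames is what prevents corrupted samples from contaminating the model, which is the drift-mitigation effect the paper's temporal-confidence samples target.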
Funding for the PtPdCu MEAA study: financially supported by the National Natural Science Foundation of China (Nos. 52073214 and 22075211), the Guangxi Natural Science Fund for Distinguished Young Scholars (No. 2024GXNSFFA010008), the Natural Science Foundation of Shandong Province (Nos. ZR2023MB049 and ZR2021QB129), the China Postdoctoral Science Foundation (No. 2020M670483), and the Science Foundation of Weifang University (No. 2023BS11); also supported by the open research fund of the Laboratory of Xinjiang Native Medicinal and Edible Plant Resources Chemistry at Kashi University, the Tianhe Qingsuo Open Research Fund of TSYS in 2022 and NSCC-TJ, and the Nankai University Large-scale Instrument Experimental Technology R&D Project (No. 21NKSYJS09).
Funding for the toric patch study: supported by the National Natural Science Foundation of China (Nos. 12001327 and 12071057).
Funding for the SL₁-GRU study: supported by the National Science Fund for Distinguished Young Scholars (No. 62025602), the National Natural Science Foundation of China (Nos. U22B2036 and 11931015), the Fok Ying-Tong Education Foundation, China (No. 171105), the Fundamental Research Funds for the Central Universities (No. G2024WD0151), and in part by the Tencent Foundation and the XPLORER PRIZE.
Funding for the absorption compensation study: funded by the National Key R&D Program of China (Grant No. 2018YFA0702504) and the Sinopec research project (P22162).
Funding for the energy resolution calibration study: supported by the National Natural Science Foundation of China (No. 41804141).
Funding for the target tracking study: supported by the Natural Science Foundation of Sichuan Province of China under Grant No. 2025ZNSFSC0522, and partially by the National Natural Science Foundation of China under Grants No. 61775030 and No. 61571096.