Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10371003 & 10471085), the Natural Science Foundation of Beijing (Grant No. 1052005), the Natural Science Foundation of Shanxi Province (Grant No. 20051007), the Key Project of the Ministry of Education (Grant No. 02023), and the Returned Abroad-Student Fund of Shanxi Province (Grant No. [2004]7).
Abstract: In this paper we classify regular p-groups with type invariants (e, 1, 1, 1) for e ≥ 2 and (1, 1, 1, 1, 1). As a by-product, we give a new approach to the classification of groups of order p^5, where p ≥ 5 is a prime.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11071150), the Natural Science Foundation of Shanxi Province (Grant No. 2012011001-3), and the Shanxi Scholarship Council of China (Grant No. [2011]8-59).
Abstract: We classify, up to isomorphism, those finite p-groups, for odd primes p, which contain a cyclic subgroup of index p^3.
Abstract: Multi-label feature selection (MFS) is a crucial dimensionality reduction technique aimed at identifying informative features associated with multiple labels. However, traditional centralized methods face significant challenges in privacy-sensitive and distributed settings, often neglecting label dependencies and suffering from low computational efficiency. To address these issues, we introduce a novel framework, Fed-MFSDHBCPSO: federated MFS via a dual-layer hybrid breeding cooperative particle swarm optimization algorithm with manifold and sparsity regularization (DHBCPSO-MSR). Leveraging the federated learning paradigm, Fed-MFSDHBCPSO allows clients to perform local feature selection (FS) using DHBCPSO-MSR. Locally selected feature subsets are encrypted with differential privacy (DP) and transmitted to a central server, where they are securely aggregated and refined through secure multi-party computation (SMPC) until global convergence is achieved. Within each client, DHBCPSO-MSR employs a dual-layer FS strategy. The inner layer constructs sample and label similarity graphs, generates Laplacian matrices to capture the manifold structure between samples and labels, and applies L2,1-norm regularization to sparsify the feature subset, yielding an optimized feature weight matrix. The outer layer uses a hybrid breeding cooperative particle swarm optimization algorithm to further refine the feature weight matrix and identify the optimal feature subset. The updated weight matrix is then fed back to the inner layer for further optimization. Comprehensive experiments on multiple real-world multi-label datasets demonstrate that Fed-MFSDHBCPSO consistently outperforms both centralized and federated baseline methods across several key evaluation metrics.
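Two of the inner-layer ingredients named in this abstract, the graph Laplacian of a similarity graph and the L2,1 norm used for row sparsity, are standard and easy to state in code. The sketch below illustrates only those two building blocks under assumed function names and parameters (knn_laplacian, l21_norm, k, gamma); it is not part of the Fed-MFSDHBCPSO implementation.

```python
import numpy as np

def knn_laplacian(X, k=5, gamma=1.0):
    """Unnormalized graph Laplacian L = D - S of a k-nearest-neighbour Gaussian
    similarity graph over the rows of X (samples, or label vectors)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    S = np.exp(-gamma * d2)
    np.fill_diagonal(S, 0.0)
    # keep only the k largest similarities in each row, then symmetrize
    drop = np.argsort(S, axis=1)[:, :-k]
    S[np.arange(S.shape[0])[:, None], drop] = 0.0
    S = np.maximum(S, S.T)
    return np.diag(S.sum(axis=1)) - S

def l21_norm(W):
    """L2,1 norm of a feature weight matrix W (features x labels): the sum of the
    Euclidean norms of its rows; penalizing it pushes whole feature rows to zero."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

# An inner-layer objective of the general shape
#   ||X @ W - Y||_F**2 + alpha * trace(W.T @ X.T @ L_s @ X @ W) + beta * l21_norm(W)
# combines manifold smoothness (via a Laplacian L_s) with row sparsity of W.
```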
Funding: Supported by the NNSF of China (60574052), the NSF (05001820), and the PST of Guangdong (2005B33301008).
Abstract: In this paper we obtain two infinite classes of p-groups, calculate the orders of their automorphism groups, and correct a mistake (perhaps a misprint) in Rodney James' 1980 paper.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11526114, 11601245), the Natural Science Foundation of Guangdong Province (Grant No. 2015A030313791), the Innovative Team Project of Guangdong Province, China (Grant No. 2014KTSCX196), the Guangdong Province Innovation Talent Project for Youths (Grant No. 2015KQNCX107), and the Appropriative Researching Fund for Doctors, Guangdong University of Education (Grant No. 2013ARF07).
Abstract: A p-group G is called a JC-group if the normal closure H^G of every cyclic subgroup H satisfies |G : H^G| ≤ p or |H^G : H| ≤ p. In this paper, we classify the non-Dedekindian JC-groups for p > 2.
Funding: Supported by the National Natural Science Foundation of China (11601121) and the Henan Provincial Natural Science Foundation of China (162300410066).
Abstract: Let p be a prime number and f_2(G) be the number of factorizations G = AB of the group G, where A and B are subgroups of G. Let G belong to the following class of finite p-groups: G = ⟨a, b | a^{p^n} = b^{p^m} = 1, a^b = a^{p^{n-1}+1}⟩, where n > m ≥ 1. In this article, the factorization number f_2(G) of G is computed, improving the results of Saeedi and Farrokhi in [5].
Abstract: In this paper, we study the basis of the augmentation ideal and the quotient groups of a finite non-abelian p-group of order p^k which has a cyclic subgroup of index p, where p is an odd prime and k ≥ 3. A concrete basis for the augmentation ideal is obtained, and the structure of its quotient groups can then be determined.
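For orientation, the standard objects involved are recalled below in the notation usually used for this problem; this is textbook background on the augmentation ideal Δ(G) of the integral group ring and the quotients Δ^n/Δ^{n+1}, not the paper's concrete basis.

```latex
% Standard definitions (background only; the paper's concrete basis is not reproduced).
\[
  \varepsilon:\ \mathbb{Z}[G]\longrightarrow\mathbb{Z},\qquad
  \varepsilon\Bigl(\sum_{g\in G} a_g\,g\Bigr)=\sum_{g\in G} a_g,\qquad
  \Delta(G)=\ker\varepsilon .
\]
\[
  \Delta(G)=\bigoplus_{1\neq g\in G}\mathbb{Z}\,(g-1)
  \quad\text{as a free }\mathbb{Z}\text{-module},\qquad
  Q_n(G)=\Delta^{n}(G)\big/\Delta^{n+1}(G).
\]
```

A basis of Δ(G) adapted to its powers is the usual route to the structure of the quotient groups Q_n(G).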
Funding: Supported by the NSF of China (11171194) and the NSF of Shanxi Province (2012011001-1).
Abstract: For any prime p, all finite noncyclic p-groups which contain a self-centralizing cyclic normal subgroup are determined using cohomological techniques. Some applications are given, including a character-theoretic description of such groups.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11601121, 12171142).
Abstract: Let G be a group and A and B be subgroups of G. If G = AB, then G is said to be factorized by A and B. Let p be a prime number. The factorization numbers of a 2-generator abelian p-group and a modular p-group have been determined. Further, suppose that G is a finite p-group of the form G = ⟨a, b | a^{p^n} = b^{p^m} = 1, a^b = a^{p^{n-1}+1}⟩, where n ≥ 2, m ≥ 1. In this paper, the factorization number of G is computed completely, which generalizes the result of Saeedi and Farrokhi.
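The factorization number is easy to check by brute force for small members of this family. The sketch below enumerates all subgroups of the instance p = 3, n = 2, m = 1 (order 27) and counts ordered pairs (A, B) with AB = G; the parameter choice and all function names are illustrative, and the computed value is not taken from either paper.

```python
from itertools import combinations, product

# Brute-force f_2(G) for a small member of the family
#   G = <a, b | a^(p^n) = b^(p^m) = 1, a^b = a^(p^(n-1)+1)>.
# The instance p = 3, n = 2, m = 1 (|G| = 27) is an arbitrary illustration.
p, n, m = 3, 2, 1
NA, NB = p**n, p**m                 # orders of a and b
r = p**(n - 1) + 1                  # relation a^b = a^r
s = pow(r, -1, NA)                  # so that b a b^(-1) = a^s with s = r^(-1) mod p^n

def mul(x, y):
    """Product of a^i b^j and a^k b^l, with elements stored as pairs (i, j)."""
    i, j = x
    k, l = y
    return ((i + k * pow(s, j, NA)) % NA, (j + l) % NB)

elements = [(i, j) for i in range(NA) for j in range(NB)]
identity = (0, 0)

def closure(gens):
    """Subgroup generated by gens, returned as a frozenset of elements."""
    H = {identity, *gens}
    while True:
        new = {mul(x, y) for x in H for y in H} - H
        if not new:
            return frozenset(H)
        H |= new

# Every subgroup of a group of order p^3 needs at most two generators, so closures
# of all singletons and pairs already give the whole subgroup lattice.
subgroups = {closure(())} | {closure((g,)) for g in elements}
subgroups |= {closure(gh) for gh in combinations(elements, 2)}

# G = AB (as a set) exactly when |A| * |B| = |G| * |A ∩ B|.
f2 = sum(1 for A, B in product(subgroups, repeat=2)
         if len(A) * len(B) == len(elements) * len(A & B))
print(f"f_2(G) = {f2} for p={p}, n={n}, m={m}")
```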
Abstract: The finite volume method was applied to numerically simulate the bottom pressure field induced by regular waves, by vehicles in calm water, and by vehicles in regular waves. The solution of the Navier-Stokes (N-S) equations in the vicinity of the numerical wave tank's boundary was forced towards the theoretical wave solution by incorporating momentum source terms, thereby reducing adverse effects such as wave reflection. Simulations using laminar flow, turbulent flow, and ideal fluid models were all found capable of effectively capturing the waveform and bottom pressure of regular waves, agreeing well with experimental data. In predicting the bottom pressure field of the submerged vehicle, turbulent simulations accounting for fluid viscosity and boundary layer development gave more accurate predictions for the stern region than inviscid simulations. Because of the sphere's diffraction effect, the sphere's bottom pressure field in waves is not a linear superposition of the wave's and the sphere's individual bottom pressure fields. However, a slender submerged vehicle diffracts waves only weakly, so its bottom pressure field in waves can be approximated as a linear superposition of the wave's and the vehicle's individual bottom pressure fields, which simplifies computation and analysis.
Abstract: This paper studies baseline designs under the baseline parameterization model in experimental design, focusing on the relationship between the K-aberration criterion and the word length pattern (WLP) of regular two-level designs. The paper provides a detailed analysis of the relationship between K_5 and the WLP for regular two-level designs with resolution t = 3, and proposes corresponding theoretical results. These results not only reveal, theoretically, the connection between the orthogonal parameterization model and the baseline parameterization model, but also provide theoretical support for finding K-aberration optimal regular two-level baseline designs. It is demonstrated how to apply these theories to evaluate and select optimal experimental designs. In practical applications, experimental designers can use the theoretical results of this paper to quickly assess and select regular two-level baseline designs with minimal K-aberration by analyzing the WLP of the design. This allows the identification of key factors that significantly affect the experimental outcomes without frequently changing the factor levels, thereby maximizing the benefits of the experiment.
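Since these results hinge on the WLP of a regular two-level design, a minimal sketch of how a WLP is computed from defining generators may help; the 2^(6-2) design and its generators below are an arbitrary resolution-III illustration, not a design from the paper.

```python
# Word length pattern (WLP) of a regular two-level fractional factorial design,
# computed from its defining generators. Illustrative 2^(6-2) example only.
k, q = 6, 2                                   # 6 factors, 2 generators
A, B, C, D, E, F = (1 << i for i in range(6)) # factors encoded as bit masks
generators = [A | B | E,                      # defining word ABE   (E = AB)
              A | C | D | F]                  # defining word ACDF  (F = ACD)

# Defining contrast subgroup: all nonempty products (XORs) of the generators.
words = []
for subset in range(1, 2 ** q):
    w = 0
    for j in range(q):
        if subset & (1 << j):
            w ^= generators[j]
    words.append(w)

# A_i = number of defining words of length i, for i = 3, ..., k.
wlp = [sum(1 for w in words if bin(w).count("1") == i) for i in range(3, k + 1)]
print("WLP (A3,...,A6):", wlp)   # the resolution is the smallest i with A_i > 0
```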
Funding: Supported by the National Natural Science Foundation of China (12001327, 12071057).
Abstract: A toric patch is a kind of rational multi-sided patch associated with a finite set A of integer lattice points. A set of weights depending on a parameter is defined according to a regular decomposition of A. When all weights of the patch tend to infinity, we obtain the limiting form of the toric patch, which is called its regular control surface. Different weights may induce different regular control surfaces of the same toric patch, which prompts the question of how many regular control surfaces a toric patch has. In this paper, we first study the regular decompositions of A using an integer programming method, and then describe the relationship between all regular decompositions of A and the corresponding state polytope. Moreover, we show that the number of regular control surfaces of a toric patch associated with A equals the number of regular decompositions of A. An algorithm to calculate the number of regular control surfaces of a toric patch is provided; the algorithm also yields a method to construct all of the regular control surfaces of a toric patch. Finally, the application of the proposed result to shape deformation is demonstrated by several examples.
Funding: Supported by the National Science Fund for Distinguished Young Scholars (No. 62025602), the National Natural Science Foundation of China (Nos. U22B2036, 11931015), the Fok Ying-Tong Education Foundation, China (No. 171105), and the Fundamental Research Funds for the Central Universities (No. G2024WD0151), with additional support from the Tencent Foundation and the XPLORER PRIZE.
Abstract: In this study, we present a deterministic convergence analysis of Gated Recurrent Unit (GRU) networks enhanced by a smoothing L1 regularization technique. While GRU architectures effectively mitigate gradient vanishing/exploding issues in sequential modeling, they remain prone to overfitting, particularly under noisy or limited training data. Traditional L1 regularization, despite enforcing sparsity and accelerating optimization, introduces non-differentiable points in the error function, leading to oscillations during training. To address this, we propose a novel smoothing L1 regularization framework that replaces the non-differentiable absolute value function with a quadratic approximation, ensuring gradient continuity and stabilizing the optimization landscape. Theoretically, we rigorously establish three key properties of the resulting smoothing L1-regularized GRU (SL1-GRU) model: (1) monotonic decrease of the error function across iterations, (2) weak convergence characterized by vanishing gradients as iterations approach infinity, and (3) strong convergence of network weights to fixed points under finite conditions. Comprehensive experiments on benchmark datasets spanning function approximation, classification (KDD Cup 1999 Data, MNIST), and regression (Boston Housing, Energy Efficiency) demonstrate SL1-GRU's superiority over baseline models (RNN, LSTM, GRU, L1-GRU, L2-GRU). Empirical results show that SL1-GRU achieves 1.0%-2.4% higher test accuracy in classification and 7.8%-15.4% lower mean squared error in regression compared to the unregularized GRU, while reducing training time by 8.7%-20.1%. These outcomes validate the method's efficacy in balancing computational efficiency and generalization capability, and they strongly corroborate the theoretical analysis. The proposed framework not only resolves the non-differentiability of L1 regularization but also provides a theoretical foundation for convergence guarantees in recurrent neural network training.
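A common way to realize such a smoothing is a Huber-style function that is quadratic inside a small band around zero and equal to |w| outside it. The sketch below shows that idea; the specific formula and the band width eps are assumed illustrations, not necessarily the exact approximation used in the paper.

```python
import numpy as np

EPS = 0.1  # width of the quadratic band; an assumed, tunable constant

def smoothed_abs(w, eps=EPS):
    """Huber-style smoothed |w|: quadratic for |w| <= eps, equal to |w| outside.
    The two pieces match in value and slope at |w| = eps, so the penalty and its
    gradient are continuous everywhere (unlike the plain L1 penalty)."""
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) <= eps, w**2 / (2 * eps) + eps / 2, np.abs(w))

def smoothed_abs_grad(w, eps=EPS):
    """Gradient of the smoothed penalty: w/eps near zero, sign(w) elsewhere."""
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) <= eps, w / eps, np.sign(w))

# Regularized training loss for a weight vector w:
#   E(w) = data_loss(w) + lam * smoothed_abs(w).sum()
w = np.array([-0.5, -0.05, 0.0, 0.02, 1.3])
print(smoothed_abs(w))
print(smoothed_abs_grad(w))
```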
Abstract: We use the Schrödinger–Newton equation to calculate the regularized self-energy of a particle using a regular self-gravitational and electrostatic potential derived in string T-duality. The particle mass M is no longer concentrated at a point but is diluted and described by a quantum-corrected smeared energy density, resulting in corrections to the energy of the particle, which is interpreted as a regularized self-energy. We extend our results and find corrections for relativistic particles using the Klein–Gordon, Proca, and Dirac equations. An important finding is that we extract a form of the generalized uncertainty principle (GUP) from the corrected energy. This form of the GUP is shown to depend on the nature of the particles; namely, for bosons (spin 0 and spin 1) we obtain a quadratic form of the GUP, while for fermions (spin 1/2) we obtain a linear form. The correlation we find between spin and the GUP may offer insights for investigating quantum gravity.
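For readers unfamiliar with the terminology, the schematic shapes of a quadratic and a linear GUP are recalled below. These are the generic textbook forms, with dimensionless parameters α and β and Planck length ℓ_P; they are not claimed to match the paper's exact coefficients or signs.

```latex
% Schematic quadratic and linear GUP forms (generic shapes only; the paper's
% coefficients, signs, and normalizations may differ).
\[
  \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}
  \Bigl(1 + \beta\,\frac{\ell_P^{2}}{\hbar^{2}}\,(\Delta p)^{2}\Bigr)
  \qquad\text{(quadratic GUP, spin 0 and spin 1)},
\]
\[
  \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}
  \Bigl(1 + \alpha\,\frac{\ell_P}{\hbar}\,\Delta p\Bigr)
  \qquad\text{(linear GUP, spin 1/2)},
\]
where $\alpha$ and $\beta$ are dimensionless parameters and $\ell_P$ is the Planck length.
```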
Funding: Funded by the National Key R&D Program of China (Grant No. 2018YFA0702504) and the Sinopec research project (P22162).
Abstract: Absorption compensation is a process involving the exponential amplification of reflection amplitudes. This process amplifies both the seismic signal and the noise, thereby substantially reducing the signal-to-noise ratio of seismic data. Therefore, this paper proposes a multichannel inversion absorption compensation method based on structure tensor regularization. First, the structure tensor is used to extract the spatial inclination (dip) of seismic signals, and a spatial prediction filter is designed along the dip direction. The spatial prediction filter is then introduced into the regularization condition of multichannel inversion absorption compensation, and the compensation is carried out within the framework of multichannel inversion theory. The spatial predictability of seismic signals is thus built into the objective function of the absorption compensation inversion, so the inversion can effectively suppress the noise amplification effect during compensation and improve the recovery accuracy of high-frequency signals. Synthetic and field data tests demonstrate the accuracy and effectiveness of the proposed method.
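The dip-extraction step can be sketched concisely: build the gradient structure tensor of the section, smooth it locally, and take the eigenvector of the smallest eigenvalue as the local event direction. The function below is only such a sketch with assumed names and parameters; the paper's tensor smoothing and prediction filter design are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_dip(section, sigma=2.0):
    """Local dip of a 2D seismic section (axis 0 = time, axis 1 = trace) from the
    gradient structure tensor; returns dt/dx of coherent events at every sample."""
    gt, gx = np.gradient(section.astype(float))        # time and trace derivatives
    Jtt = gaussian_filter(gt * gt, sigma)               # smoothed tensor components
    Jtx = gaussian_filter(gt * gx, sigma)
    Jxx = gaussian_filter(gx * gx, sigma)
    J = np.stack([np.stack([Jtt, Jtx], -1),
                  np.stack([Jtx, Jxx], -1)], -2)        # 2x2 tensor per sample
    w, v = np.linalg.eigh(J)                            # eigenvalues in ascending order
    ev = v[..., :, 0]                                   # eigenvector of the smallest one
    return ev[..., 0] / (ev[..., 1] + 1e-12)            # slope along coherent events

# A dip-oriented prediction filter would then predict each trace from neighbouring
# traces shifted by this dip; that predictability enters the inversion as the
# regularization term described in the abstract.
```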
Funding: Supported by the National Natural Science Foundation of China (No. 41804141).
Abstract: Energy resolution calibration is crucial for analyzing gamma-ray spectra measured with a scintillation detector. A locally constrained regularization method is proposed to determine the resolution calibration parameters. First, a Monte Carlo simulation model consistent with the actual measurement system is constructed to obtain the energy deposition distribution in the scintillation crystal. The regularization objective function is then established from weighted least squares plus additional constraints, where the constraints use a special weighting scheme based on the incident gamma-ray energies. An intelligent search algorithm is then used to find the optimal resolution calibration parameters by minimizing the objective function. The most appropriate regularization parameter was determined through numerical experiments; with a regularization parameter of 30, the calibrated results exhibited the minimum RMSE. Simulations and test pit experiments were conducted to verify the performance of the proposed method. The simulation results demonstrate that the proposed algorithm determines resolution calibration parameters more accurately than traditional weighted least squares, and the test pit results show that the R-squared values between the calibrated and measured spectra exceed 0.99. The accurate resolution calibration parameters determined by the proposed method lay the foundation for gamma-ray spectral processing and simulation benchmarking.
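As a rough illustration of regularized weighted least squares calibration, the sketch below fits an assumed resolution model sigma(E) = sqrt(a + bE + cE^2) to made-up peak data with a simple quadratic penalty. The model form, the prior, the data, and the reuse of the value 30 for the regularization weight are all illustrative assumptions, not the paper's locally constrained formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up peak energies (keV) and standard deviations (keV) for illustration only.
E = np.array([356.0, 662.0, 1173.0, 1332.0, 1461.0, 2615.0])
sigma_meas = np.array([14.0, 19.0, 25.0, 26.5, 28.0, 37.0])
weights = 1.0 / sigma_meas            # crude weighting; the paper's scheme is energy-based

def sigma_model(params, E):
    a, b, c = params
    return np.sqrt(np.clip(a + b * E + c * E**2, 1e-9, None))

PRIOR = np.array([1.0, 0.5, 1e-4])    # assumed prior guess for (a, b, c)

def objective(params, lam=30.0):      # lam = 30 echoes the abstract, but only loosely
    resid = weights * (sigma_model(params, E) - sigma_meas)
    return np.sum(resid**2) + lam * np.sum((params - PRIOR)**2)

res = minimize(objective, x0=PRIOR, method="Nelder-Mead")
print("calibrated (a, b, c):", res.x)
```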
Funding: Supported by the Natural Science Foundation of Sichuan Province of China under Grant No. 2025ZNSFSC0522, and partially supported by the National Natural Science Foundation of China under Grants No. 61775030 and No. 61571096.
Abstract: Target tracking is an essential task in contemporary computer vision applications. However, its effectiveness is susceptible to model drift caused by changes in target appearance, which often compromises tracking robustness and precision. In this paper, a universally applicable method based on correlation filters is introduced to mitigate model drift in complex scenarios. It employs temporal-confidence samples as a prior to guide the model update process and ensure its precision and consistency over long periods. An improved update mechanism based on the peak side-lobe to peak correlation energy (PSPCE) criterion is proposed, which selects high-confidence samples along the temporal dimension to update the temporal-confidence samples. Extensive experiments on various benchmarks demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods. Especially when the target appearance changes significantly, our method is more robust and achieves a balance between precision and speed. Specifically, on the object tracking benchmark (OTB-100) dataset, compared to the baseline, the tracking precision of our model improves by 8.8%, 8.8%, 5.1%, 5.6%, and 6.9% for background clutter, deformation, occlusion, rotation, and illumination variation, respectively. The results indicate that the proposed method can significantly enhance the robustness and precision of target tracking in dynamic and challenging environments, offering a reliable solution for applications such as real-time monitoring, autonomous driving, and precision guidance.
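Confidence measures on a correlation response map are the natural ingredients of such an update test. The sketch below computes two standard ones, the peak-to-sidelobe ratio (PSR) and the average peak-to-correlation energy (APCE); the paper's PSPCE criterion combines peak side-lobe and peak correlation energy information, but its exact formula is not reproduced here, and the function name and window size are assumptions.

```python
import numpy as np

def response_confidence(R, exclude=5):
    """PSR and APCE of a correlation-filter response map R (2D array).
    High values of both suggest a reliable, single-peaked response."""
    peak = R.max()
    py, px = np.unravel_index(R.argmax(), R.shape)
    # Side-lobe region: everything outside a small window around the peak.
    mask = np.ones_like(R, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    sidelobe = R[mask]
    psr = (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
    apce = (peak - R.min())**2 / (np.mean((R - R.min())**2) + 1e-12)
    return psr, apce

# A high-confidence frame (large PSR and APCE) would be admitted into the
# temporal-confidence sample pool used to update the filter model.
```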
Funding: Supported by the Intelligent Policing Key Laboratory of Sichuan Province (No. ZNJW2022KFZD002) and by the Scientific and Technological Research Program of the Chongqing Municipal Education Commission (Grant Nos. KJQN202302403, KJQN202303111).
Abstract: Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge of it. This is achieved by exploiting a property of adversarial examples: when generated from a surrogate model, they often remain effective against other models because of their good transferability. However, adversarial examples also tend to overfit, as they are tailored to the particular architecture and feature representation of the source model; consequently, their effectiveness drops in black-box transfer attacks on different target models. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature Layer (RCFL). The proposed method first uses regularization constraints to attenuate the low-frequency components of the initial examples. Perturbations are then added at a pre-specified layer of the source model using back-propagation, in order to modify the original adversarial examples. A regularized loss function is then used to enhance black-box transferability across different target models. The proposed method is tested on the ImageNet, CIFAR-100, and Stanford Cars datasets with various target models. The results demonstrate that it achieves a significantly higher transfer-based adversarial attack success rate than baseline techniques.
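To make the feature-layer idea concrete, the sketch below shows one generic gradient step that perturbs an input so that its activations at a chosen intermediate layer move away from those of the clean input, with a simple norm regularizer on the perturbation. This is a generic feature-space transfer attack step under assumed names (model_features, alpha, lam), not the RCFL method or its loss.

```python
import torch
import torch.nn.functional as F

def feature_layer_attack_step(model_features, x_adv, x_clean, alpha=1/255, lam=0.1):
    """One illustrative update that pushes the activations of x_adv at a chosen
    intermediate layer away from those of x_clean, while an L2 term keeps the
    perturbation small. model_features maps an image batch to that layer's output."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    with torch.no_grad():
        feat_clean = model_features(x_clean)
    feat_adv = model_features(x_adv)
    # Minimizing this loss maximizes the feature distance, subject to the penalty.
    loss = -F.mse_loss(feat_adv, feat_clean) + lam * torch.norm(x_adv - x_clean)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv - alpha * x_adv.grad.sign()
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```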
Funding: Supported by the National Natural Science Foundation of China (12361034) and the Natural Science Foundation of Shaanxi Province (2022JM-034).
Abstract: We investigate a sufficient condition, in terms of the azimuthal component ω^θ of ω = curl u in cylindrical coordinates, for the regularity of axisymmetric weak solutions to the 3D incompressible Navier-Stokes equations. More precisely, we prove that if ■, then the weak solution u is actually a regular solution. A similar regularity criterion holds in the homogeneous Triebel-Lizorkin spaces.