Funding: Supported by the National Natural Science Foundation of China under Grant No. 62172132.
Abstract: The surge of large-scale models in recent years has led to breakthroughs in numerous fields, but it has also introduced higher computational costs and more complex network architectures. These increasingly large and intricate networks pose challenges for deployment and execution while also exacerbating the issue of network over-parameterization. To address this issue, various network compression techniques have been developed, such as network pruning. A typical pruning algorithm follows a three-step pipeline of training, pruning, and retraining. Existing methods often set the pruned filters directly to zero during retraining, significantly reducing the parameter space. However, this direct pruning strategy frequently results in irreversible information loss. In the early stages of training, a network still contains much uncertainty, and evaluating filter importance may not be sufficiently rigorous. To manage the pruning process effectively, this paper proposes a flexible neural network pruning algorithm based on the logistic growth differential equation, taking the characteristics of network training into account. Unlike other pruning algorithms that directly reduce filter weights, this algorithm introduces a three-stage adaptive weight decay strategy inspired by the logistic growth differential equation. It employs a gentle decay rate in the initial training stage, a rapid decay rate during the intermediate stage, and a slower decay rate in the network convergence stage. Additionally, the decay rate is adjusted adaptively based on the filter weights at each stage. By controlling the adaptive decay rate at each stage, the pruning of neural network filters can be effectively managed. In experiments on the CIFAR-10 and ILSVRC-2012 datasets, the pruned networks achieve a significant reduction in floating-point operations at the same pruning rate. Specifically, at a 30% pruning rate on the ResNet-110 network, the pruned network not only decreases floating-point operations by 40.8% but also improves classification accuracy by 0.49% compared to the original network.
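The slow-fast-slow schedule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the steepness parameter, and the magnitude-scaled update rule are all assumptions; the three-stage profile comes from using the derivative of the logistic curve as the per-epoch decay rate, which is small at the start, peaks at the midpoint, and tapers off toward convergence.

```python
import math

def logistic_decay_rate(epoch, total_epochs, steepness=10.0, max_rate=0.2):
    """Per-epoch decay rate following a logistic (S-shaped) schedule:
    gentle early, rapid in the middle, gentle again near convergence.
    The derivative of the logistic curve peaks at the midpoint, giving
    the slow-fast-slow profile; 4*s*(1-s) normalizes the peak to 1."""
    t = epoch / total_epochs                       # training progress in [0, 1]
    s = 1.0 / (1.0 + math.exp(-steepness * (t - 0.5)))  # logistic position
    return max_rate * 4.0 * s * (1.0 - s)          # logistic derivative, peak = max_rate

def decay_filter(weight_norm, epoch, total_epochs):
    """Shrink a pruned filter's norm softly instead of zeroing it,
    so the decay stays proportional to the current weight magnitude."""
    rate = logistic_decay_rate(epoch, total_epochs)
    return weight_norm * (1.0 - rate)
```

Because pruned filters are decayed rather than zeroed, a filter that regains importance later in training has not been irreversibly destroyed, which matches the motivation given above.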
Funding: Supported by the National Natural Science Foundation of China (12261018) and the Universities Key Laboratory of Mathematical Modeling and Data Mining in Guizhou Province (2023013).
Abstract: In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and of the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
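The interplay between the trivial solution and the positive periodic solution can be illustrated with a simplified simulation. This is a sketch only: the paper's model is age-selective, while the example below applies a proportional impulsive harvest x → (1 − h)x at the end of each period to an unstructured logistic population, using the closed-form logistic solution between impulses; all parameter values are assumptions.

```python
import math

def simulate_impulsive_logistic(x0=10.0, r=0.8, K=100.0, h=0.3,
                                period=1.0, n_periods=50):
    """Simulate dx/dt = r*x*(1 - x/K) with an impulsive harvest
    x -> (1 - h)*x at the end of each period, using the exact
    logistic flow between impulses. Returns the population just
    after each harvest."""
    xs = []
    x = x0
    growth = math.exp(r * period)            # logistic growth factor over one period
    for _ in range(n_periods):
        x = K * x * growth / (K + x * (growth - 1.0))  # closed-form logistic flow
        x *= (1.0 - h)                        # impulsive harvest
        xs.append(x)
    return xs
```

With a mild harvest (here h = 0.3 and (1 − h)e^{rT} > 1) the sequence settles onto a positive periodic solution, while a heavy harvest (e.g. h = 0.9) drives the population to the trivial solution, mirroring the stability dichotomy the abstract describes.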
Funding: Supported by NSFC (No. 12101482) and the Natural Science Foundation of Shaanxi Province, China (No. 2018JQ1052).
Abstract: In this paper, we consider the fourth-order parabolic equation with p(x)-Laplacian and variable exponent source u_t + Δ^(2)u − div(|∇u|^(p(x)−2)∇u) = |u|^(q(x)−1)u. By applying the potential well method, we obtain global existence, asymptotic behavior, and blow-up of solutions with initial energy J(u_0) ≤ d. Moreover, we estimate an upper bound on the blow-up time for J(u_0) ≤ 0.
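For reference, the potential well method is built around an energy functional J and its mountain-pass level d. The abstract does not state the functional used; a standard form for an equation of this type (an assumption here) would be:

```latex
J(u) = \frac{1}{2}\,\|\Delta u\|_{2}^{2}
     + \int_{\Omega} \frac{1}{p(x)}\,|\nabla u|^{p(x)}\,dx
     - \int_{\Omega} \frac{1}{q(x)+1}\,|u|^{q(x)+1}\,dx,
```

where the sign of J(u_0) relative to the depth d of the potential well separates globally existing solutions from blowing-up ones.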
Funding: Supported by the National Natural Science Foundation of China (12001074), the Research Innovation Program of Graduate Students in Hunan Province (CX20220258), the Research Innovation Program of Graduate Students of Central South University (1053320214147), and the Key Scientific Research Project of Higher Education Institutions in Henan Province (25B110025).
Abstract: This paper deals with McKean-Vlasov backward stochastic differential equations with weak monotonicity coefficients. We first establish the existence and uniqueness of solutions to McKean-Vlasov backward stochastic differential equations. Then we obtain a comparison theorem in the one-dimensional situation.
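For context, a common formulation of a McKean-Vlasov BSDE (a generic form, assumed here; the paper's precise setting may differ) is:

```latex
Y_t = \xi + \int_t^T f\bigl(s,\, Y_s,\, Z_s,\, \mathcal{L}(Y_s, Z_s)\bigr)\,ds
          - \int_t^T Z_s\,dW_s, \qquad t \in [0, T],
```

where W is a Brownian motion and 𝓛(Y_s, Z_s) denotes the law of the solution pair, so the generator f depends on the distribution of the solution itself; "weak monotonicity" refers to conditions on f weaker than Lipschitz continuity.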