The spoke, as a key component, has a significant impact on the performance of the non-pneumatic tire (NPT). Current research has focused on adjusting spoke structures to improve a single performance measure of the NPT; few studies have been conducted to synergistically improve multiple performance measures by optimizing the spoke structure. Inspired by the concept of functionally gradient structures, this paper introduces a functionally gradient honeycomb NPT and its optimization method. Firstly, this paper completes the parameterization of the honeycomb spoke structure and establishes numerical models of honeycomb NPTs with seven different gradients. Subsequently, the accuracy of the numerical models is verified using experimental methods. Then, the static and dynamic characteristics of these gradient honeycomb NPTs are thoroughly examined using the finite element method. The findings highlight that the gradient structure of NPT-3 has superior performance. Building upon this, the study investigates the effects of key parameters, such as honeycomb spoke thickness and length, on load-carrying capacity, honeycomb spoke stress, and mass. Finally, a multi-objective optimization method is proposed that uses a response surface model (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to further optimize the functionally gradient honeycomb NPTs. The optimized NPT-OP shows a 23.48% reduction in radial stiffness, an 8.95% reduction in maximum spoke stress, and a 16.86% reduction in spoke mass compared to the initial NPT-1. The damping characteristics of the NPT-OP are also improved. The results offer a theoretical foundation and technical methodology for the structural design and optimization of gradient honeycomb NPTs.
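NSGA-II, used in the abstract above, ranks candidate designs by Pareto dominance. As a minimal illustrative sketch (not the authors' implementation), a dominance filter over hypothetical (radial stiffness, max spoke stress, spoke mass) triples might look like:

```python
def dominates(a, b):
    # a dominates b: no worse in every objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # keep exactly the points that no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (radial stiffness, max spoke stress, spoke mass) triples
designs = [(1.0, 2.0, 3.0), (2.0, 3.0, 4.0), (2.0, 1.0, 5.0)]
front = pareto_front(designs)
```

Non-dominated sorting in NSGA-II repeats this filtering to peel off successive fronts; the second design above is dominated by the first and would be ranked behind both survivors.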
Large-scale multi-objective optimization problems (LSMOPs) pose challenges to existing optimizers since a set of well-converged and diverse solutions should be found in huge search spaces. While evolutionary algorithms are good at solving small-scale multi-objective optimization problems, they are criticized for low efficiency in converging to the optima of LSMOPs. By contrast, mathematical programming methods offer fast convergence on large-scale single-objective optimization problems, but they have difficulties in finding diverse solutions for LSMOPs. Currently, how to integrate evolutionary algorithms with mathematical programming methods to solve LSMOPs remains unexplored. In this paper, a hybrid algorithm is tailored for LSMOPs by coupling differential evolution and a conjugate gradient method. On the one hand, conjugate gradients and differential evolution are used to update different decision variables of a set of solutions, where the former drives the solutions to quickly converge towards the Pareto front and the latter promotes the diversity of the solutions to cover the whole Pareto front. On the other hand, the objective decomposition strategy of evolutionary multi-objective optimization is used to differentiate the conjugate gradients of solutions, and the line search strategy of mathematical programming is used to ensure that each offspring is of higher quality than its parent. In comparison with state-of-the-art evolutionary algorithms, mathematical programming methods, and hybrid algorithms, the proposed algorithm exhibits better convergence and diversity performance on a variety of benchmark and real-world LSMOPs.
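The coupling described above — gradient-based updates for "convergence" variables and differential-evolution mutation for "diversity" variables — can be sketched in simplified form. The sketch below substitutes a plain gradient step for the paper's conjugate-gradient update and uses a sphere function as a stand-in single objective; the population size, variable split, and parameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # stand-in single objective (sphere); the paper targets large-scale MOPs
    return float(np.sum(x ** 2))

def grad(x):
    return 2.0 * x

def hybrid_generation(pop, cg_mask, F=0.5, lr=0.2):
    # One illustrative generation: a gradient step updates the "convergence"
    # variables (cg_mask) while DE/rand/1 mutation updates the "diversity"
    # variables; greedy selection keeps a child only if it improves.
    n = len(pop)
    out = pop.copy()
    for i in range(n):
        idx = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        child = pop[i].copy()
        child[cg_mask] -= lr * grad(pop[i])[cg_mask]
        child[~cg_mask] = a[~cg_mask] + F * (b[~cg_mask] - c[~cg_mask])
        if f(child) < f(pop[i]):
            out[i] = child
    return out

mask = np.zeros(10, dtype=bool)
mask[:5] = True                      # first half: gradient step; second half: DE
pop = rng.normal(size=(8, 10))
before = np.mean([f(x) for x in pop])
for _ in range(30):
    pop = hybrid_generation(pop, mask)
after = np.mean([f(x) for x in pop])
```

The greedy selection guarantees the population objective never worsens, mirroring the paper's requirement that each offspring be at least as good as its parent.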
Deep deterministic policy gradient (DDPG) has been proved to be effective in optimizing particle swarm optimization (PSO), but whether DDPG can optimize multi-objective discrete particle swarm optimization (MODPSO) remains to be determined. The present work aims to probe into this topic. Experiments showed that DDPG can not only quickly improve the convergence speed of MODPSO, but also help it escape the local optima that MODPSO may suffer from. The research findings are of great significance for the theoretical research and application of MODPSO.
In this paper, we propose a three-term conjugate gradient method for solving unconstrained optimization problems, based on the Hestenes-Stiefel (HS) and Polak-Ribière-Polyak (PRP) conjugate gradient methods. Under the standard Wolfe line search, the proposed search direction is a descent direction. For general nonlinear functions, the method is globally convergent. Finally, numerical results show that the proposed method is efficient.
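For context, the classical PRP update that the proposed three-term method builds on can be demonstrated on a 2-D convex quadratic with exact line search (a sketch of the underlying family, not the paper's new direction):

```python
import numpy as np

# PRP conjugate gradient on f(x) = 0.5 x^T A x - b^T x with exact line search
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

def grad(x):
    return A @ x - b

x = np.zeros(2)
g = grad(x)
d = -g
for _ in range(10):
    alpha = -(g @ d) / (d @ (A @ d))        # exact line search for a quadratic
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-10:       # converged; avoid a 0/0 beta
        break
    beta = (g_new @ (g_new - g)) / (g @ g)  # Polak-Ribiere-Polyak coefficient
    d = -g_new + beta * d
    g = g_new

x_star = np.linalg.solve(A, b)
```

On a quadratic with exact line search, PRP coincides with linear conjugate gradient and terminates in at most n steps (here, two); three-term variants modify the direction update `d` to enforce descent under inexact (Wolfe) line searches as well.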
Fluid dynamic research on rectangular and trapezoidal fins aims to increase heat transfer by means of large surfaces. Comparing the thermal and flow performance of the trapezoidal cavity form reveals that trapezoidal fins tend to be more efficient, particularly when material optimization is critical. Motivated by the increasing need for sustainable energy management, this work analyses the thermal performance of inclined trapezoidal and rectangular porous fins utilising a unique hybrid nanofluid. The effectiveness of nanoparticles in a working fluid is primarily determined by their thermophysical properties; hence, optimising these properties can significantly improve overall performance. This study considers the dispersion of Graphene Oxide (GO) and Molybdenum Disulfide in the base fluid, engine oil. Temperature profiles are analysed by altering the radiative, porosity, wet porous, and angle-of-inclination parameters. Surface and contour plots are constructed using the Lobatto IIIa collocation method with the BVP5C solver in MATLAB, and gradient descent optimisation is used to predict the combined heat transfer rate. According to the study, fluid temperature consistently decreases when the angle of inclination, wet porous parameter, porosity parameter, and radiative parameter increase, suggesting significantly improved heat dissipation. The trapezoidal fin consistently exhibits a superior heat transfer mechanism compared with a rectangular fin; it is found to transmit heat at a rate 0.05% higher than that of the rectangular fin. The present study is validated through comparison with previous studies. This research provides useful design insights for sophisticated engineering uses, including electrical cooling devices, heat exchangers, radiators, and solar heaters.
Drop-on-demand (DOD) bioprinting has been widely used in tissue engineering due to its high-throughput efficiency and cost effectiveness. However, this type of bioprinting involves challenges such as satellite generation, overly large droplet generation, and overly low droplet speed. These challenges reduce the stability and precision of DOD printing, disorder cell arrays, and hence generate further structural errors. In this paper, a multi-objective optimization (MOO) design method for DOD printing parameters through fully connected neural networks (FCNNs) is proposed in order to solve these challenges. The MOO problem comprises two objective functions: to develop the satellite formation model with FCNNs, and to decrease droplet diameter and increase droplet speed. A hybrid multi-subgradient descent bundle method with an adaptive learning rate algorithm (HMSGDBA), which combines the multi-subgradient descent bundle (MSGDB) method with the Adam algorithm, is introduced in order to search for the Pareto-optimal set of the MOO problem. The superiority of HMSGDBA is demonstrated through comparative studies with the MSGDB method. The experimental results show that a single droplet can be printed stably and the droplet speed can be increased from 0.88 to 2.08 m·s^-1 after optimization with the proposed method. The proposed method can improve both printing precision and stability, and is useful in realizing precise cell arrays and complex biological functions. Furthermore, it can be used to obtain guidelines for the setup of cell-printing experimental platforms.
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique that ensures the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve the method and give further analysis.
The development of artificial intelligence for science has led to the emergence of learning-based research paradigms, necessitating a reevaluation of the design of multi-objective optimization (MOO) methods. The new generation of MOO methods should be rooted in automated learning rather than manual design. In this paper, we introduce a new automatic learning paradigm for optimizing MOO problems, and propose a multi-gradient learning to optimize (ML2O) method, which automatically learns a generator (or mapping) from multiple gradients to update directions. As a learning-based method, ML2O acquires knowledge of local landscapes by leveraging information from the current step and incorporates global experience extracted from historical iteration trajectory data. By introducing a new guarding mechanism, we propose a guarded multi-gradient learning to optimize (GML2O) method, and prove that the iterative sequence generated by GML2O converges to a Pareto stationary point. The experimental results demonstrate that our learned optimizer outperforms hand-designed competitors on training multi-task learning neural networks.
The intelligent optimization of a multi-objective evolutionary algorithm is combined with a gradient algorithm. The hybrid multi-objective gradient algorithm uses real-number encoding. Test functions are used to analyze the efficiency of the algorithm. In a simulated water-phantom case, the algorithm is applied to the inverse planning process of intensity-modulated radiation treatment (IMRT). The objective functions of the planning target volume (PTV) and normal tissue (NT) are based on the average dose distribution. The obtained intensity profile shows that the hybrid multi-objective gradient algorithm saves computational time and has good accuracy, thus meeting the requirements of practical applications.
In this paper, an efficient conjugate gradient method is given to solve general unconstrained optimization problems, which can guarantee the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable compared with the PRP+ method, so it can be widely used in scientific computation.
This paper proposes a distributed continuous-time momentum gradient descent (MGD) algorithm for convex optimization over multi-agent networks, where agents collaboratively minimize the sum of local convex cost functions through coordinated communication. First, we establish exponential convergence under ideal continuous-time coordination through Lyapunov analysis. To bridge the gap between theoretical designs and digital implementations, two strategies are developed: (1) a time-triggered control (TTC) scheme that guarantees stability under bounded sampling intervals; (2) a periodic event-triggered control (PETC) strategy. Notably, the PETC strategy is introduced to address the inefficiency in network resource utilization inherent in TTC by activating communication only when necessary. By formulating the PETC-based algorithm as a hybrid dynamical system with event-driven thresholds, we subsequently construct a parameterized hybrid Lyapunov function to rigorously prove the global asymptotic stability of the equilibrium point. Comprehensive numerical experiments confirm the convergence of the algorithm under both strategies, with PETC achieving a reduction in communication frequency compared to TTC while maintaining solution accuracy.
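The continuous-time momentum dynamics above have a familiar discrete-time analogue, the heavy-ball method. The single-agent sketch below is illustrative only (it is not the paper's distributed TTC/PETC scheme, and the stepsize and momentum values are assumptions):

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.1, momentum=0.8, iters=200):
    # Discrete-time momentum ("heavy-ball") gradient descent:
    #   v_{k+1} = momentum * v_k - lr * grad(x_k);  x_{k+1} = x_k + v_{k+1}
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        v = momentum * v - lr * grad(x)
        x = x + v
    return x

# strongly convex quadratic with minimizer (1, -2)
quad_grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
x_min = heavy_ball(quad_grad, [5.0, 5.0])
```

For strongly convex quadratics these parameters give linear (geometric) convergence, the discrete counterpart of the exponential convergence established in the paper's continuous-time analysis.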
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions using local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
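A common baseline for this setting is decentralized gradient descent with gradient tracking over a ring network. The sketch below illustrates the distributed setup only — it is not the paper's primal-dual SGD algorithm — with scalar quadratic local costs f_i(x) = (x - c_i)^2 and an illustrative stepsize:

```python
import numpy as np

n = 5
c = np.arange(n, dtype=float)        # agent i holds f_i(x) = (x - c_i)^2
W = np.zeros((n, n))                 # doubly stochastic mixing matrix (ring)
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def local_grads(x):
    # stacked gradients of the local costs, each at its own agent's iterate
    return 2.0 * (x - c)

x = np.zeros(n)                      # one scalar iterate per agent
y = local_grads(x)                   # gradient-tracking variable
lr = 0.05
for _ in range(2000):
    x_new = W @ x - lr * y           # mix with neighbours, step along tracker
    y = W @ y + local_grads(x_new) - local_grads(x)
    x = x_new
# all agents should agree on the minimizer of sum_i f_i, i.e. mean(c) = 2
```

Because W is doubly stochastic, the average of y always equals the average local gradient, which is what drives every agent to the global minimizer rather than its own local one.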
The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOPs). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the convergence rate can be strengthened to linear if the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOPs enjoys the same convergence properties as for scalar optimization.
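For two objectives, the multi-objective steepest-descent direction used in such analyses is the negative of the minimum-norm element of the convex hull of the two gradients, which has a closed form. A sketch under standard assumptions (the quadratic objectives, starting point, and constant stepsize are illustrative choices):

```python
import numpy as np

def common_descent_dir(g1, g2):
    # Negative of the minimum-norm element of conv{g1, g2}; for two gradients
    # the minimizing weight has the closed form below (clipped to [0, 1]).
    diff = g1 - g2
    denom = float(diff @ diff)
    lam = 0.5 if denom == 0.0 else float(np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)

# f1(x) = ||x - a||^2, f2(x) = ||x - b||^2: the Pareto set is the segment [a, b]
a = np.array([0.0, 0.0])
b = np.array([1.0, 0.0])
x = np.array([2.0, 2.0])
for _ in range(200):
    d = common_descent_dir(2.0 * (x - a), 2.0 * (x - b))
    x = x + 0.1 * d                  # constant stepsize, as in the abstract
```

The iterate stops where the zero vector lies in the convex hull of the gradients, i.e., at a Pareto stationary point on the segment between a and b.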
A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak, Ribière, and Polyak is suggested. Based on an eigenvalue analysis, it is shown that search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the objective function's convexity. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
Y. Liu and C. Storey (1992) proposed the famous LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under the Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It can guarantee the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments showing the efficiency of the proposed method compared with the famous PRP+ method.
In the evolving landscape of artificial intelligence and machine learning, the choice of optimization algorithm can significantly impact the success of model training and the accuracy of predictions. This paper embarks on a rigorous and comprehensive exploration of widely adopted optimization techniques, specifically focusing on their performance when applied to the notoriously challenging Rosenbrock function. As a benchmark problem known for its deceptive curvature and narrow valleys, the Rosenbrock function provides fertile ground for examining the nuances of algorithmic behavior. The study covers a diverse array of optimization methods, including traditional gradient descent, its stochastic variant (SGD), and gradient descent with momentum. The investigation further extends to adaptive methods such as RMSprop, AdaGrad, and the widely used Adam optimizer. By analyzing and visualizing the optimization paths, convergence rates, and gradient norms, this paper uncovers critical insights into the strengths and limitations of each technique. The findings illuminate the dynamics of these algorithms and offer actionable guidance for their deployment in complex, real-world optimization problems, revealing the subtle yet profound impact of algorithmic choices on optimization performance.
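As one concrete instance of the adaptive methods compared above, a from-scratch Adam run on the Rosenbrock function can be sketched as follows (the starting point, learning rate, and iteration budget are illustrative choices, not the paper's exact experimental setup):

```python
import numpy as np

def rosen(p):
    x, y = p
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

def rosen_grad(p):
    x, y = p
    return np.array([-2.0 * (1.0 - x) - 400.0 * x * (y - x ** 2),
                     200.0 * (y - x ** 2)])

def adam(grad_fn, x0, lr=0.02, b1=0.9, b2=0.999, eps=1e-8, iters=20000):
    # textbook Adam with bias-corrected first and second moment estimates
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, iters + 1):
        g = grad_fn(x)
        m = b1 * m + (1.0 - b1) * g
        v = b2 * v + (1.0 - b2) * g * g
        m_hat = m / (1.0 - b1 ** t)
        v_hat = v / (1.0 - b2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

x_start = np.array([-1.5, 2.0])
x_adam = adam(rosen_grad, x_start)
```

The per-coordinate normalization by √v̂ is what lets Adam traverse the narrow, poorly scaled valley where plain gradient descent with a comparable stepsize would diverge or crawl.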
In this expository paper we present the optimal transport problem of Monge-Ampère-Kantorovitch (MAK for short) and its approximate entropic regularization. Contrary to the MAK optimal transport problem, the solution of the entropic optimal transport problem is always unique, and is characterized by the Schrödinger system. The relationship between the Schrödinger system, the associated Bernstein process, and optimal transport was developed by Léonard [32, 33] (and earlier by Mikami [39] via an h-process). We present Sinkhorn's algorithm for solving the Schrödinger system and recent results on its convergence rate. We study the gradient descent algorithm based on the dual optimal problem and prove its exponential convergence, whose rate may be independent of the regularization constant. This exposition is motivated by recent applications of optimal transport to domains such as machine learning, image processing, econometrics, and astrophysics.
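Sinkhorn's algorithm mentioned above alternately rescales the rows and columns of the Gibbs kernel until the transport plan matches the prescribed marginals. A minimal sketch on a small discrete problem (the cost matrix, marginals, and regularization strength are illustrative):

```python
import numpy as np

def sinkhorn(C, mu, nu, eps=0.1, iters=500):
    # Entropic OT: alternately rescale the Gibbs kernel K = exp(-C/eps) so the
    # plan P = diag(u) K diag(v) matches the marginals mu (rows) and nu (cols).
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

n = 4
pts = np.linspace(0.0, 1.0, n)
C = (pts[:, None] - pts[None, :]) ** 2   # squared-distance cost
mu = np.full(n, 1.0 / n)
nu = np.full(n, 1.0 / n)
P = sinkhorn(C, mu, nu)
```

The scaling vectors (u, v) are exponentials of the Schrödinger potentials, so the fixed point of this iteration solves exactly the Schrödinger system discussed in the abstract.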
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52072156, 52272366) and the Postdoctoral Foundation of China (Grant No. 2020M682269).
Funding: Supported in part by the National Key Research and Development Program of China (2018AAA0100100); the National Natural Science Foundation of China (61906001, 62136008, U21A20512); the Key Program of the Natural Science Project of the Educational Commission of Anhui Province (KJ2020A0036); and an Alexander von Humboldt Professorship for Artificial Intelligence funded by the Federal Ministry of Education and Research, Germany.
Funding: Supported by the Science and Technology Project of Guangxi (Guike AD23023002).
Funding: Supported by the "Regional Innovation System & Education (RISE)" program through the Seoul RISE Center, funded by the Ministry of Education (MOE) and the Seoul Metropolitan Government (2025-RISE-01-027-04).
Funding: Supported by the Major Program of the National Natural Science Foundation of China (Grant Nos. 11991020 and 11991024); the National Natural Science Foundation of China (Grant Nos. 11971084 and 12171060); the National Natural Science Foundation of China and Hong Kong Research Grants Council Joint Research Program (Grant No. 12261160365); the Team Project of Innovation Leading Talent in Chongqing (Grant No. CQYC20210309536); the Natural Science Foundation of Chongqing, China (Grant No. CSTB2024NSCQLZX0140); the Major Project of the Science and Technology Research Program of the Chongqing Education Commission of China (Grant No. KJZD-M202300504); and the Foundation of Chongqing Normal University (Grant Nos. 22XLB005 and 22XLB006).
Funding: Supported by the National Basic Research Program of China ("973" Program); the National Natural Science Foundation of China (60872112, 10805012); the Natural Science Foundation of Zhejiang Province (Z207588); and the College Science Research Project of Anhui Province (KJ2008B268).
Funding: Supported by the Fund of Chongqing Education Committee (KJ091104).
Abstract: In this paper, an efficient conjugate gradient method is given for general unconstrained optimization problems, which guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
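The paper's specific method and line search are not reproduced here; as a minimal sketch of the conjugate gradient family it belongs to, the code below runs a generic PRP+ iteration on a convex quadratic, where the Wolfe line search can be replaced by the exact closed-form step. The matrix, vector, and names are our own illustrative choices.

```python
# PRP+ conjugate gradient on f(x) = 0.5*x'Ax - b'x with A symmetric positive definite.
# For a quadratic, the exact line search has the closed form alpha = -g.d / d'Ad,
# and the PRP+ restart beta = max(beta_PRP, 0) keeps d a descent direction.

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 0.0, 2.0]]
b = [1.0, 2.0, 3.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

x = [0.0, 0.0, 0.0]
g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient Ax - b
d = [-gi for gi in g]
for _ in range(10):
    if dot(g, g) < 1e-20:
        break
    Ad = matvec(A, d)
    alpha = -dot(g, d) / dot(d, Ad)                # exact line search step
    x = [xi + alpha * di for xi, di in zip(x, d)]
    g_new = [gi - bi for gi, bi in zip(matvec(A, x), b)]
    beta = max(0.0, dot(g_new, [a - c for a, c in zip(g_new, g)]) / dot(g, g))  # PRP+
    d = [-gn + beta * di for gn, di in zip(g_new, d)]
    g = g_new
# For an SPD quadratic with exact line search, the iteration reaches the
# minimizer A^{-1}b in at most n steps (here n = 3).
```

On general nonlinear functions the closed-form step is unavailable and a strong Wolfe line search, as in the abstract, is what guarantees sufficient descent and global convergence.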
Funding: Supported by the National Natural Science Foundation of China (Grant No. 08120005).
Abstract: This paper proposes a distributed continuous-time momentum gradient descent (MGD) algorithm for convex optimization over multi-agent networks, where agents collaboratively minimize the sum of local convex cost functions through coordinated communication. First, we establish exponential convergence under ideal continuous-time coordination through Lyapunov analysis. To bridge the gap between theoretical designs and digital implementations, two strategies are developed: (1) a time-triggered control (TTC) scheme that guarantees stability under bounded sampling intervals; and (2) a periodic event-triggered control (PETC) strategy. Notably, the PETC strategy addresses the inefficient network resource utilization inherent in TTC by activating communication only when necessary. By formulating the PETC-based algorithm as a hybrid dynamical system with event-driven thresholds, we construct a parameterized hybrid Lyapunov function to rigorously prove global asymptotic stability of the equilibrium point. Comprehensive numerical experiments confirm the convergence of the algorithm under both strategies, with PETC reducing communication frequency compared to TTC while maintaining solution accuracy.
Funding: Supported by the Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research, the Swedish Research Council, and the National Natural Science Foundation of China (62133003, 61991403, 61991404, 61991400).
Abstract: The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions through local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. Numerical experiments demonstrate the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
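The paper's primal-dual stochastic algorithm is not reproduced here; as a simplified, deterministic illustration of the distributed setting, the sketch below runs plain decentralized gradient descent with a doubly stochastic mixing matrix on scalar quadratic local costs. All numbers and names are our own assumptions.

```python
# Simplified decentralized gradient descent for min_x sum_i f_i(x),
# with f_i(x) = 0.5*(x - a_i)^2 over a 3-agent complete graph.
# Because W is doubly stochastic, the average of the iterates follows exact
# gradient descent on the global cost, while individual agents settle within
# an O(alpha) neighborhood of consensus (constant stepsize alpha).

a = [0.0, 3.0, 6.0]                       # local minimizers; global optimum = mean(a) = 3
W = [[0.5, 0.25, 0.25],                   # doubly stochastic mixing matrix
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
alpha = 0.05                              # constant stepsize
x = [10.0, -5.0, 0.0]                     # arbitrary initial agent states

for _ in range(2000):
    mixed = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]
    x = [mixed[i] - alpha * (x[i] - a[i]) for i in range(3)]   # mix, then local gradient step

avg = sum(x) / 3.0
# avg converges to the global minimizer 3.0; each agent agrees with it up to O(alpha)
```

The primal-dual correction in the abstract's algorithm is precisely what removes this O(alpha) consensus bias and accommodates stochastic gradients.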
Funding: This research is supported by the Chinese Natural Science Foundation (Nos. 11631013, 11971372) and the National 973 Program of China (No. 2015CB856002). The authors are grateful for the valuable comments and suggestions of two anonymous referees, and thank Dr. Hui Zhang of the National University of Defense Technology for his many suggestions and comments on an early draft of this paper.
Abstract: The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOPs). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the convergence rate can be strengthened to linear if the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOPs enjoys the same convergence properties as in scalar optimization.
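A minimal sketch of the setting, under our own illustrative assumptions (two strongly convex quadratics and the standard min-norm common-descent direction), shows the linear convergence behavior described above: with a constant stepsize the iterate approaches a Pareto stationary point and the descent direction shrinks geometrically.

```python
# Multiobjective gradient descent with constant stepsize (illustrative sketch).
# Objectives: f1(x,y) = (x-1)^2 + y^2 and f2(x,y) = x^2 + (y-1)^2, both strongly convex.
# The update direction is the negated min-norm convex combination of the two
# gradients; it vanishes exactly at Pareto stationary points.

def grads(p):
    x, y = p
    return [2 * (x - 1), 2 * y], [2 * x, 2 * (y - 1)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def min_norm(g1, g2):
    diff = [a - b for a, b in zip(g1, g2)]          # g1 - g2
    denom = dot(diff, diff)
    lam = 0.5 if denom == 0 else max(0.0, min(1.0, -dot(g2, diff) / denom))
    return [-(lam * a + (1 - lam) * b) for a, b in zip(g1, g2)]

p, step = [2.0, 2.0], 0.4                           # constant stepsize, safely below 1/L
for _ in range(60):
    d = min_norm(*grads(p))
    p = [pi + step * di for pi, di in zip(p, d)]
d = min_norm(*grads(p))
# p approaches (0.5, 0.5), a Pareto stationary point on the segment between the
# two minimizers, and ||d|| decays linearly (geometrically) in the iteration count
```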
Funding: Supported by the Research Council of Semnan University.
Abstract: A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak and Ribière, and by Polyak, is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
Funding: Supported by the Youth Project Foundation of Chongqing Three Gorges University (13QN17) and the Fund of Scientific Research in Southeast University (the Support Project of Fundamental Research).
Abstract: Y. Liu and C. Storey (1992) proposed the famous LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under a Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments showing the efficiency of the proposed method in comparison with the famous PRP+ method.
Abstract: In the evolving landscape of artificial intelligence and machine learning, the choice of optimization algorithm can significantly impact the success of model training and the accuracy of predictions. This paper presents a rigorous and comprehensive exploration of widely adopted optimization techniques, focusing on their performance when applied to the notoriously challenging Rosenbrock function. As a benchmark problem known for its deceptive curvature and narrow valleys, the Rosenbrock function provides fertile ground for examining the nuances of algorithmic behavior. The study covers a diverse array of optimization methods, including traditional gradient descent, its stochastic variant (SGD), and gradient descent with momentum, and extends to adaptive methods such as RMSprop, AdaGrad, and the widely used Adam optimizer. By analyzing and visualizing the optimization paths, convergence rates, and gradient norms, this paper uncovers insights into the strengths and limitations of each technique, illuminating the dynamics of these algorithms and offering actionable guidance for their deployment in complex, real-world optimization problems.
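The comparison described above can be sketched in a few dozen lines; the sketch below pits plain gradient descent, momentum, and Adam against the Rosenbrock function from the classic start (-1.2, 1.0). The hyperparameters are our own illustrative choices, not those of the paper.

```python
import math

# Rosenbrock function f(x,y) = (1-x)^2 + 100*(y-x^2)^2 and its gradient.
# The narrow curved valley y = x^2 is what makes plain gradient descent slow.
def f(x, y):
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2

def grad(x, y):
    return (-2 * (1 - x) - 400 * x * (y - x * x), 200 * (y - x * x))

def gd(lr=1e-4, steps=5000):
    x, y = -1.2, 1.0
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * gx, y - lr * gy
    return x, y

def momentum(lr=1e-4, mu=0.9, steps=5000):
    x, y, vx, vy = -1.2, 1.0, 0.0, 0.0
    for _ in range(steps):
        gx, gy = grad(x, y)
        vx, vy = mu * vx - lr * gx, mu * vy - lr * gy   # heavy-ball velocity
        x, y = x + vx, y + vy
    return x, y

def adam(lr=0.02, b1=0.9, b2=0.999, eps=1e-8, steps=5000):
    p = [-1.2, 1.0]
    m, v = [0.0, 0.0], [0.0, 0.0]
    for t in range(1, steps + 1):
        g = grad(p[0], p[1])
        for i in range(2):
            m[i] = b1 * m[i] + (1 - b1) * g[i]          # first-moment estimate
            v[i] = b2 * v[i] + (1 - b2) * g[i] ** 2     # second-moment estimate
            mhat = m[i] / (1 - b1 ** t)                 # bias corrections
            vhat = v[i] / (1 - b2 ** t)
            p[i] -= lr * mhat / (math.sqrt(vhat) + eps)
    return p[0], p[1]

f0 = f(-1.2, 1.0)
results = {name: f(*fn()) for name, fn in
           [("gd", gd), ("momentum", momentum), ("adam", adam)]}
# every optimizer reduces the objective from f0; momentum and Adam traverse
# the valley much faster than plain gradient descent at the same budget
```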
Abstract: In this expository paper, we present the Monge-Ampère-Kantorovitch (MAK) optimal transport problem and its approximate entropic regularization. In contrast to the MAK optimal transport problem, the solution of the entropic optimal transport problem is always unique and is characterized by the Schrödinger system. The relationship between the Schrödinger system, the associated Bernstein process, and optimal transport was developed by Léonard [32, 33] (and earlier by Mikami [39] via an h-process). We present Sinkhorn's algorithm for solving the Schrödinger system and recent results on its convergence rate. We study the gradient descent algorithm based on the dual optimization problem and prove its exponential convergence, whose rate may be independent of the regularization constant. This exposition is motivated by recent applications of optimal transport to domains such as machine learning, image processing, econometrics, and astrophysics.
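Sinkhorn's algorithm mentioned above is simple enough to sketch directly: alternate diagonal scalings of the Gibbs kernel until both marginal constraints of the Schrödinger system are met. The marginals, cost matrix, and regularization strength below are our own illustrative choices.

```python
import math

# Sinkhorn's algorithm for entropic optimal transport (illustrative sketch).
# Given marginals a, b and cost matrix C, alternate the scaling updates
# u = a / (K v) and v = b / (K^T u) with Gibbs kernel K = exp(-C/eps);
# the coupling P = diag(u) K diag(v) then matches both marginals.

a = [0.5, 0.3, 0.2]                      # source marginal
b = [0.4, 0.4, 0.2]                      # target marginal
C = [[0.0, 1.0, 2.0],                    # ground cost
     [1.0, 0.0, 1.0],
     [2.0, 1.0, 0.0]]
eps = 0.5                                # entropic regularization strength

K = [[math.exp(-c / eps) for c in row] for row in C]
u = [1.0, 1.0, 1.0]
v = [1.0, 1.0, 1.0]
for _ in range(1000):
    Kv = [sum(K[i][j] * v[j] for j in range(3)) for i in range(3)]
    u = [a[i] / Kv[i] for i in range(3)]
    Ktu = [sum(K[i][j] * u[i] for i in range(3)) for j in range(3)]
    v = [b[j] / Ktu[j] for j in range(3)]

P = [[u[i] * K[i][j] * v[j] for j in range(3)] for i in range(3)]
row_sums = [sum(P[i]) for i in range(3)]
col_sums = [sum(P[i][j] for i in range(3)) for j in range(3)]
# row_sums ~ a and col_sums ~ b: the Schrödinger system is solved numerically
```

Smaller eps approximates the unregularized MAK problem more closely but slows Sinkhorn's linear convergence, which is the trade-off the convergence-rate results in the abstract address.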