Journal Articles
272 articles found
1. Performance Analysis and Multi-Objective Optimization of Functional Gradient Honeycomb Non-pneumatic Tires
Authors: Haichao Zhou, Haifeng Zhou, Haoze Ren, Zhou Zheng, Guolin Wang. Chinese Journal of Mechanical Engineering, 2025, Issue 3, pp. 412-431.
The spoke as a key component has a significant impact on the performance of the non-pneumatic tire (NPT). Current research has focused on adjusting spoke structures to improve a single performance aspect of the NPT; few studies have synergistically improved multiple performance metrics by optimizing the spoke structure. Inspired by the concept of functionally gradient structures, this paper introduces a functionally gradient honeycomb NPT and its optimization method. Firstly, the paper completes the parameterization of the honeycomb spoke structure and establishes numerical models of honeycomb NPTs with seven different gradients. Subsequently, the accuracy of the numerical models is verified using experimental methods. Then, the static and dynamic characteristics of these gradient honeycomb NPTs are thoroughly examined using the finite element method. The findings highlight that the gradient structure of NPT-3 has superior performance. Building upon this, the study investigates the effects of key parameters, such as honeycomb spoke thickness and length, on load-carrying capacity, honeycomb spoke stress, and mass. Finally, a multi-objective optimization method is proposed that uses a response surface model (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to further optimize the functional gradient honeycomb NPTs. The optimized NPT-OP shows a 23.48% reduction in radial stiffness, an 8.95% reduction in maximum spoke stress, and a 16.86% reduction in spoke mass compared with the initial NPT-1. The damping characteristics of the NPT-OP are also improved. The results offer a theoretical foundation and technical methodology for the structural design and optimization of gradient honeycomb NPTs.
Keywords: non-pneumatic tires; honeycomb structure; gradient structure; multi-objective optimization
2. A Modified PRP-HS Hybrid Conjugate Gradient Algorithm for Solving Unconstrained Optimization Problems
Authors: LI Xiangli, WANG Zhiling, LI Binglan. 应用数学 (Mathematica Applicata), PKU Core, 2025, Issue 2, pp. 553-564.
In this paper, we propose a three-term conjugate gradient method for solving unconstrained optimization problems based on the Hestenes-Stiefel (HS) and Polak-Ribiere-Polyak (PRP) conjugate gradient methods. Under the standard Wolfe line search, the proposed search direction is a descent direction. For general nonlinear functions, the method is globally convergent. Finally, numerical results show that the proposed method is efficient.
Keywords: conjugate gradient method; unconstrained optimization; sufficient descent condition; global convergence
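As background for this entry, a minimal nonlinear conjugate gradient iteration with a PRP+-style update and a backtracking line search can be sketched as follows (a generic illustration only — the paper's method is a three-term PRP-HS hybrid under the Wolfe line search, and the quadratic test problem here is made up):

```python
import numpy as np

def prp_cg(f, grad, x0, iters=500, tol=1e-8):
    """Nonlinear conjugate gradient with the PRP+ update and a backtracking
    (Armijo) line search. A generic sketch, not the paper's exact method."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart if d is not a descent direction
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:   # Armijo condition
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        # PRP+ parameter: beta = max(g_new^T (g_new - g) / ||g||^2, 0)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Convex quadratic test problem: f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = prp_cg(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(2))
print(x_star)  # close to np.linalg.solve(A, b) = [0.2, 0.4]
```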
3. Integrating Conjugate Gradients Into Evolutionary Algorithms for Large-Scale Continuous Multi-Objective Optimization (Cited: 6)
Authors: Ye Tian, Haowen Chen, Haiping Ma, Xingyi Zhang, Kay Chen Tan, Yaochu Jin. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 10, pp. 1801-1817.
Large-scale multi-objective optimization problems (LSMOPs) pose challenges to existing optimizers, since a set of well-converged and diverse solutions must be found in huge search spaces. While evolutionary algorithms are good at solving small-scale multi-objective optimization problems, they are criticized for low efficiency in converging to the optimums of LSMOPs. By contrast, mathematical programming methods offer fast convergence on large-scale single-objective optimization problems, but they have difficulty finding diverse solutions for LSMOPs. How to integrate evolutionary algorithms with mathematical programming methods to solve LSMOPs remains unexplored. In this paper, a hybrid algorithm is tailored for LSMOPs by coupling differential evolution and a conjugate gradient method. On the one hand, conjugate gradients and differential evolution are used to update different decision variables of a set of solutions, where the former drives the solutions to converge quickly towards the Pareto front and the latter promotes the diversity of the solutions to cover the whole Pareto front. On the other hand, the objective decomposition strategy of evolutionary multi-objective optimization is used to differentiate the conjugate gradients of solutions, and the line search strategy of mathematical programming is used to ensure that each offspring is of higher quality than its parent. In comparison with state-of-the-art evolutionary algorithms, mathematical programming methods, and hybrid algorithms, the proposed algorithm exhibits better convergence and diversity performance on a variety of benchmark and real-world LSMOPs.
Keywords: conjugate gradient; differential evolution; evolutionary computation; large-scale multi-objective optimization; mathematical programming
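The evolutionary half of such a hybrid can be illustrated with a bare-bones DE/rand/1/bin loop (the conjugate gradient half and the decomposition strategy of the paper are omitted; population size, F, CR, and the sphere test function are illustrative):

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, gens=200, F=0.6, CR=0.9, seed=0):
    """Classic DE/rand/1/bin -- a minimal sketch of the evolutionary component
    only; all parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # mutation
            cross = rng.random(dim) < CR                  # binomial crossover
            cross[rng.integers(dim)] = True               # force at least one gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fit[i]:                               # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)),
                                        bounds=[(-5, 5)] * 5)
print(best_f)  # near 0 for the 5-D sphere function
```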
4. Optimizing the Multi-Objective Discrete Particle Swarm Optimization Algorithm by the Deep Deterministic Policy Gradient Algorithm
Authors: Sun Yang-Yang, Yao Jun-Ping, Li Xiao-Jun, Fan Shou-Xiang, Wang Zi-Wei. Journal on Artificial Intelligence, 2022, Issue 1, pp. 27-35.
Deep deterministic policy gradient (DDPG) has been proved effective in optimizing particle swarm optimization (PSO), but whether DDPG can optimize multi-objective discrete particle swarm optimization (MODPSO) remains to be determined. The present work aims to probe into this topic. Experiments showed that DDPG can not only quickly improve the convergence speed of MODPSO, but also overcome the problem of local optima that MODPSO may suffer from. The findings are of significance for the theoretical research and application of MODPSO.
Keywords: deep deterministic policy gradient; multi-objective discrete particle swarm optimization; deep reinforcement learning; machine learning
5. Multi-Objective Optimization Design through Machine Learning for Drop-on-Demand Bioprinting (Cited: 6)
Authors: Jia Shi, Jinchun Song, Bin Song, Wen F. Lu. Engineering (SCIE, EI), 2019, Issue 3, pp. 586-593.
Drop-on-demand (DOD) bioprinting has been widely used in tissue engineering due to its high-throughput efficiency and cost effectiveness. However, this type of bioprinting involves challenges such as satellite generation, overly large droplets, and overly low droplet speeds. These challenges reduce the stability and precision of DOD printing, disorder cell arrays, and hence generate further structural errors. In this paper, a multi-objective optimization (MOO) design method for DOD printing parameters based on fully connected neural networks (FCNNs) is proposed to address these challenges. The MOO problem comprises two objectives: developing the satellite formation model with FCNNs, and decreasing droplet diameter while increasing droplet speed. A hybrid multi-subgradient descent bundle method with an adaptive learning rate (HMSGDBA), which combines the multi-subgradient descent bundle (MSGDB) method with the Adam algorithm, is introduced to search for the Pareto-optimal set of the MOO problem. The superiority of HMSGDBA is demonstrated through comparative studies with the MSGDB method. The experimental results show that a single droplet can be printed stably and the droplet speed can be increased from 0.88 to 2.08 m·s^-1 after optimization. The proposed method improves both printing precision and stability, and is useful for realizing precise cell arrays and complex biological functions. Furthermore, it can be used to derive guidelines for the setup of cell-printing experimental platforms.
Keywords: drop-on-demand printing; inkjet; gradient descent; multi-objective optimization; fully connected neural networks
6. A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization
Authors: Hao Fan, Zhibin Zhu, Anwa Zhou. Applied Mathematics, 2011, Issue 9, pp. 1119-1123.
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique that ensures the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve the method and give further analysis.
Keywords: large-scale unconstrained optimization; conjugate gradient method; sufficient descent property; global convergence
7. Hybrid Multi-Objective Gradient Algorithm for Inverse Planning of IMRT
Authors: 李国丽, 盛大宁, 王俊椋, 景佳, 王超, 闫冰. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2010, Issue 1, pp. 97-101.
The intelligent optimization of a multi-objective evolutionary algorithm is combined with a gradient algorithm. The hybrid multi-objective gradient algorithm uses real-number encoding. Test functions are used to analyze the efficiency of the algorithm. In a simulated water-phantom case, the algorithm is applied to the inverse planning process of intensity-modulated radiation treatment (IMRT). The objective functions of the planning target volume (PTV) and normal tissue (NT) are based on the average dose distribution. The obtained intensity profile shows that the hybrid multi-objective gradient algorithm saves computational time and has good accuracy, thus meeting the requirements of practical applications.
Keywords: gradient methods; inverse planning; multi-objective optimization; hybrid gradient algorithm
8. A New Nonlinear Conjugate Gradient Method for Unconstrained Optimization Problems (Cited: 1)
Authors: LIU Jin-kui, WANG Kai-rong, SONG Xiao-qian, DU Xiang-lin. Chinese Quarterly Journal of Mathematics (CSCD), 2010, Issue 3, pp. 444-450.
In this paper, an efficient conjugate gradient method is given for general unconstrained optimization problems, which guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
Keywords: unconstrained optimization; conjugate gradient method; strong Wolfe line search; sufficient descent property; global convergence
9. A Comparative Study of Optimization Techniques on the Rosenbrock Function
Authors: Lebede Ngartera, Coumba Diallo. Open Journal of Optimization, 2024, Issue 3, pp. 51-63.
In the evolving landscape of artificial intelligence and machine learning, the choice of optimization algorithm can significantly impact the success of model training and the accuracy of predictions. This paper presents a rigorous exploration of widely adopted optimization techniques, focusing on their performance on the notoriously challenging Rosenbrock function. As a benchmark problem known for its deceptive curvature and narrow valleys, the Rosenbrock function provides fertile ground for examining the nuances of algorithmic behavior. The study covers a diverse array of optimization methods, including traditional gradient descent, its stochastic variant (SGD), and gradient descent with momentum, and further extends to adaptive methods such as RMSprop, AdaGrad, and the widely used Adam optimizer. By analyzing and visualizing the optimization paths, convergence rates, and gradient norms, the paper uncovers critical insights into the strengths and limitations of each technique. The findings illuminate the dynamics of these algorithms and offer actionable guidance for their deployment in complex, real-world optimization problems.
Keywords: machine learning; optimization algorithms; Rosenbrock function; gradient descent
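A minimal version of one of the compared setups — Adam on the 2-D Rosenbrock function — might look like this (standard Adam defaults with an illustrative learning rate; not the paper's exact experimental configuration):

```python
import numpy as np

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def adam(grad, x0, lr=0.05, steps=5000, b1=0.9, b2=0.999, eps=1e-8):
    """Plain Adam; hyperparameters are the common defaults plus an
    illustrative learning rate."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g**2       # second-moment estimate
        m_hat = m / (1 - b1**t)            # bias correction
        v_hat = v / (1 - b2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

x = adam(rosenbrock_grad, [-1.2, 1.0])     # classic starting point
print(rosenbrock(x))  # far below the starting value of about 24.2
```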
10. A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization (Cited: 7)
Authors: Xinlei Yi, Shengjun Zhang, Tao Yang, Tianyou Chai, Karl Henrik Johansson. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 5, pp. 812-833.
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions through local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. Numerical experiments demonstrate the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
Keywords: distributed nonconvex optimization; linear speedup; Polyak-Łojasiewicz (P-L) condition; primal-dual algorithm; stochastic gradient descent
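For intuition about consensus-based minimization of a sum of local costs, here is a sketch of a simpler deterministic relative of the paper's primal-dual SGD — gradient tracking on a ring of four agents with made-up scalar quadratics:

```python
import numpy as np

# Each of n agents holds a private quadratic cost f_i(x) = 0.5*a_i*(x - c_i)^2;
# the network minimizes sum_i f_i using only neighbor communication.
# This is a gradient-tracking sketch, not the paper's primal-dual SGD;
# all problem data are made up.
n = 4
a = np.array([1.0, 2.0, 3.0, 4.0])
c = np.array([0.0, 1.0, 2.0, 3.0])
grad = lambda x: a * (x - c)          # stacked local gradients

# Doubly stochastic mixing matrix for a 4-node ring (lazy Metropolis weights)
W = np.array([[0.5 , 0.25, 0.0 , 0.25],
              [0.25, 0.5 , 0.25, 0.0 ],
              [0.0 , 0.25, 0.5 , 0.25],
              [0.25, 0.0 , 0.25, 0.5 ]])

alpha = 0.05
x = np.zeros(n)          # each agent's local estimate
y = grad(x)              # gradient trackers, initialized to the local gradients
for _ in range(1000):
    x_new = W @ x - alpha * y
    y = W @ y + grad(x_new) - grad(x)   # track the average gradient
    x = x_new

x_opt = (a * c).sum() / a.sum()          # minimizer of the global sum: 2.0
print(x)  # every agent's estimate is close to x_opt
```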
11. Convergence Rate of Gradient Descent Method for Multi-Objective Optimization (Cited: 1)
Authors: Liaoyuan Zeng, Yuhong Dai, Yakui Huang. Journal of Computational Mathematics (SCIE, CSCD), 2019, Issue 5, pp. 689-703.
The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOPs). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the rate strengthens to linear when the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOPs enjoys the same convergence properties as in scalar optimization.
Keywords: multi-objective optimization; gradient descent; convergence rate
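The gradient descent step for multiple objectives uses the minimum-norm element of the convex hull of the objective gradients as the common descent direction; for two objectives this combination has a closed form. A constant-stepsize sketch on two illustrative quadratics (the Pareto set is the segment between their minimizers):

```python
import numpy as np

# Multi-objective steepest descent for f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
# At each step, take the minimum-norm convex combination of the two gradients
# as the common descent direction; a constant stepsize is used, as in the
# setting analyzed by the paper. The objectives here are made up.
a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])
g1 = lambda x: 2 * (x - a)            # gradient of f1
g2 = lambda x: 2 * (x - b)            # gradient of f2

x = np.array([3.0, 3.0])
for _ in range(500):
    v1, v2 = g1(x), g2(x)
    diff = v1 - v2
    # closed-form minimizer of ||t*v1 + (1-t)*v2|| over t in [0, 1]
    t = 0.0 if diff @ diff == 0 else float(np.clip((v2 @ (v2 - v1)) / (diff @ diff), 0.0, 1.0))
    d = t * v1 + (1 - t) * v2
    x -= 0.1 * d                       # constant stepsize

print(x)  # a Pareto-stationary point on the segment between a and b
```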
12. A Modified Three-Term Conjugate Gradient Method with Sufficient Descent Property (Cited: 1)
Author: Saman Babaie-Kafaki. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2015, Issue 3, pp. 263-272.
A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak and Ribière, and by Polyak, is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independently of the line search and the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
Keywords: unconstrained optimization; conjugate gradient method; eigenvalue; sufficient descent condition; global convergence
13. A Descent Gradient Method and Its Global Convergence
Author: LIU Jin-kui. Chinese Quarterly Journal of Mathematics (CSCD), 2014, Issue 1, pp. 142-150.
Y. Liu and C. Storey (1992) proposed the famous LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under the Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments that show the efficiency of the proposed method in comparison with the famous PRP+ method.
Keywords: unconstrained optimization; conjugate gradient method; strong Wolfe line search; sufficient descent property; global convergence
14. Intrusion Detection in Computer Networks Based on an Ensemble Graph Convolutional Neural Network
Authors: 范申民, 王磊, 张芬. 自动化与仪器仪表 (Automation & Instrumentation), 2025, Issue 5, pp. 7-11.
To safeguard the security of network environments, a network intrusion detection technique based on an ensemble graph convolutional neural network is proposed. The method uses stochastic gradient descent and the Root Mean Square Propagation (RMSProp) optimizer to improve the training efficiency and strengthen the classification performance of the detection model. The results show that the model achieves an intrusion detection accuracy of 96.41%-97.18%. After the proposed optimization, the intrusion detection technique improves markedly in both training efficiency and training accuracy. The model classifies data by access source, which improves the classification of access behavior; the improved classification in turn sharpens the recognition of attack behavior, strengthens the computer's defenses, and effectively protects users' network security. The study thus provides an effective technical method for detecting network intrusions.
Keywords: ensemble graph convolutional neural network; network intrusion detection; stochastic gradient descent; RMSProp optimizer
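The RMSProp update mentioned in the entry can be sketched in a few lines (shown on an illustrative ill-conditioned quadratic, not on the paper's intrusion detection model):

```python
import numpy as np

def rmsprop(grad, x0, lr=0.01, rho=0.9, eps=1e-8, steps=2000):
    """Plain RMSProp update. The test function below is illustrative,
    not the paper's detection model."""
    x = np.asarray(x0, dtype=float)
    s = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        s = rho * s + (1 - rho) * g**2          # running mean of squared gradients
        x -= lr * g / (np.sqrt(s) + eps)        # per-coordinate scaled step
    return x

# Ill-conditioned quadratic: f(x) = x0^2 + 10*x1^2
x = rmsprop(lambda x: np.array([2 * x[0], 20 * x[1]]), [5.0, 5.0])
print(x)  # both coordinates near 0 despite the 10x curvature gap
```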
15. GD-PSO-Based Inversion of the Initial In-Situ Stress Field of Underground Caverns at a Hydropower Station
Authors: 包腾飞, 程健悦, 邢钰, 周喜武, 陈雨婷, 赵向宇. 郑州大学学报(工学版) (Journal of Zhengzhou University, Engineering Science), PKU Core, 2025, Issue 5, pp. 130-136.
To address the difficulty of balancing convergence speed and nonlinear regression accuracy in existing inversion methods for the initial in-situ stress field, an inversion analysis method combining gradient descent (GD) and particle swarm optimization (PSO) is proposed. First, considering eight basic boundary conditions covering the gravity field and five tectonic stress fields that influence the initial stress field, finite element software is used to compute the stresses at measurement points under each boundary condition. Second, taking the measured in-situ stresses as target values, regression analysis with the GD-PSO algorithm yields the influence coefficient of each boundary condition. Finally, the regressed stress at every point of the model is computed and input into a three-dimensional finite element model as the initial stress field for stress balancing. A case study shows that, compared with plain PSO, the cubic regression polynomial obtained with GD-PSO has the highest accuracy, with a mean squared error of 0.579, and the regression agrees well with the measured stresses. After stress balancing, except for the vertical stress component, the differences between computed and measured stresses at the measurement points are small, the displacements of the surrounding rock are essentially zero in all directions, and the maximum displacement is only 5.26 mm.
Keywords: large pumped-storage power station; underground cavern group; in-situ stress inversion; gradient descent; particle swarm optimization
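The division of labor in such GD-PSO schemes — PSO for coarse global search, gradient descent for local refinement — can be sketched on a made-up 1-D function with two local minima (all swarm parameters are illustrative, and the stress-field model is not reproduced):

```python
import numpy as np

f  = lambda x: x**4 - 3 * x**2 + x          # two local minima; global one near -1.30
df = lambda x: 4 * x**3 - 6 * x + 1

rng = np.random.default_rng(1)
pos = rng.uniform(-2, 2, 20)                # particle positions
vel = np.zeros(20)
pbest, pbest_f = pos.copy(), f(pos)
gbest = pos[pbest_f.argmin()]
for _ in range(100):                        # PSO phase (global search)
    r1, r2 = rng.random(20), rng.random(20)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -2, 2)
    better = f(pos) < pbest_f
    pbest[better], pbest_f[better] = pos[better], f(pos)[better]
    gbest = pbest[pbest_f.argmin()]

x = gbest
for _ in range(200):                        # GD phase (local refinement)
    x -= 0.01 * df(x)

print(x, f(x))  # near the global minimizer x ≈ -1.30
```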
16. Measurement of Objective-Lens Polarization Aberration Based on Hybrid Particle Swarm-Gradient Descent Optimization
Authors: 裴世鑫, 郑改革, 曹兆楼. 仪器仪表学报 (Chinese Journal of Scientific Instrument), PKU Core, 2025, Issue 8, pp. 198-205.
Owing to coating and material birefringence and internal stress, high-numerical-aperture optical systems inevitably exhibit polarization aberration, making image quality dependent on the polarization state of the incident light. Existing measurement techniques generally require complex setups and are inconvenient and inefficient. To address this, a folded (double-pass) optical path is proposed, and polarization-resolved wavefront measurement is used to retrieve the Jones matrix of a microscope objective from the intensity distribution of the focused field, reducing system complexity. First, a numerical model based on ray tracing and scalar diffraction theory simulates the intensity distribution at different axial positions of the focused field. Second, the complex-amplitude retrieval is cast as an optimization problem: a hybrid particle swarm and gradient descent algorithm optimizes the Zernike polynomial coefficients characterizing the polarization aberration so as to minimize the deviation between predicted and target intensity distributions, thereby retrieving the Jones matrix of the optical system. Third, numerical simulations with a prescribed polarization aberration show that the retrieved Jones matrix elements agree well with the target values, with errors below 10^-3. Finally, the polarization aberration of a commercial high-NA microscope objective is measured experimentally; the retrieved Zernike coefficients yield predicted intensity distributions consistent with the measured ones. Theory and experiment show that the proposed algorithm effectively recovers the polarization aberration of microscope objectives with a simple, easy-to-operate setup, offering a new technique for manufacturing inspection of high-NA optical systems.
Keywords: polarization aberration; phase retrieval; particle swarm optimization; gradient descent; parallel computing; Jones matrix
17. Distributed Weakly Convex Optimization Algorithms Based on Mirror Descent
Authors: 程松松, 陈茹, 樊渊, 邱剑彬. 自动化学报 (Acta Automatica Sinica), PKU Core, 2025, Issue 8, pp. 1842-1856.
Many nonconvex optimization problems in machine learning, such as robust phase retrieval, low-rank matrix completion, and sparse dictionary learning, are essentially weakly convex. The inherent nonconvexity of weakly convex problems makes them hard to solve, and the growing complexity and scale of such systems, together with the distributed storage of their parameters, make the traditional centralized, single-agent computing framework inefficient. To address these challenges, a distributed mirror descent algorithm is designed and its convergence is analyzed from the perspective of the Bregman-Moreau envelope, proving a convergence rate of O(ln K/√K), where K is the number of iterations. Further, for the case where exact gradient information is unavailable, an orthogonal random direction matrix method is adopted for gradient estimation; compared with traditional random-vector methods, it exploits multi-dimensional directional information and thus significantly improves estimation accuracy and efficiency. Building on this, a distributed zeroth-order mirror descent algorithm is proposed that attains the same convergence rate as in the exact-gradient setting. Finally, numerical simulations and experiments on phase retrieval verify the effectiveness of both algorithms.
Keywords: distributed optimization; weak convexity; mirror descent; Bregman-Moreau envelope; zeroth-order gradient
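A canonical instance of mirror descent is the entropy mirror map on the probability simplex (exponentiated gradient). The sketch below minimizes a linear cost over the simplex; the data are made up, and the problem is convex rather than weakly convex as in the paper:

```python
import numpy as np

# Mirror descent with the entropy mirror map on the probability simplex:
# the mirror step is multiplicative, and renormalization plays the role of
# the Bregman projection back onto the simplex. Cost vector is illustrative.
c = np.array([1.0, 0.2, 0.5])          # minimize c @ x over the simplex
x = np.ones(3) / 3                     # start at the simplex center
lr = 0.5
for _ in range(200):
    x = x * np.exp(-lr * c)            # multiplicative (mirror) step
    x /= x.sum()                       # normalization = Bregman projection
print(x)  # mass concentrates on argmin c, i.e. coordinate 1
```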
18. Gradient Descent Learning Based on a Reinforcement Learning Strategy for Solving the GCP
Authors: 宋家欢, 王晓峰, 胡思敏, 姚佳兴, 锁小娜. 计算机应用研究 (Application Research of Computers), PKU Core, 2025, Issue 4, pp. 1011-1017.
The graph coloring problem (GCP) is a classic combinatorial optimization problem: assign a color to each vertex of a graph so that adjacent vertices receive different colors while minimizing the number of colors used. The GCP is NP-hard; traditional solvers (greedy algorithms, heuristic search, and evolutionary algorithms) are often limited by high computational complexity and prone to local optima. To address this, a gradient descent learning method based on a reinforcement learning strategy (RLS) is proposed for the GCP. Specifically, the GCP is cast as a policy optimization problem: a policy gradient algorithm maps the coloring state of the graph to a reinforcement learning state, treats color assignment as an action, uses the negative of the objective as the reward signal, and gradually optimizes the coloring policy. Experiments show that the method outperforms traditional heuristics on graph instances of various types and sizes, exhibiting strong global exploration and convergence, especially in high dimensions and under complex constraints. The study indicates that reinforcement learning-based graph coloring has broad application potential for complex combinatorial optimization and offers an effective new route for the GCP and derived problems.
Keywords: graph coloring problem; reinforcement learning strategy; gradient descent; combinatorial optimization
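The policy gradient loop the entry describes — state, action, reward, gradient step on the policy — can be stripped down to a REINFORCE update on a toy two-armed bandit (rewards and hyperparameters are made up; the graph coloring formulation itself is not reproduced):

```python
import numpy as np

# REINFORCE on a two-armed bandit with a softmax policy: sample an action,
# observe a reward, and take a gradient step on the log-probability of the
# chosen action weighted by the reward (no baseline).
rng = np.random.default_rng(0)
theta = np.zeros(2)                       # logits of the softmax policy
rewards = np.array([0.0, 1.0])            # arm 1 is strictly better
lr = 0.1
for _ in range(500):
    p = np.exp(theta) / np.exp(theta).sum()
    a = rng.choice(2, p=p)                # sample an action from the policy
    r = rewards[a]
    grad_log = -p                         # d/dtheta log p[a] = onehot(a) - p
    grad_log[a] += 1.0
    theta += lr * r * grad_log            # REINFORCE update
print(p)  # p[1] approaches 1: the policy learns to pick the better arm
```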
19. Optimal Path Planning for Intelligent Vehicles Based on an Improved Particle Swarm Algorithm
Authors: 夏佳, 郑晏群, 谢秉磊, 张鹍鹏. 机械设计与制造 (Machinery Design & Manufacture), PKU Core, 2025, Issue 2, pp. 264-268.
To improve the real-time performance and safety of path planning for intelligent vehicles in complex environments, a planning method combining an improved reinforcement learning algorithm with an improved particle swarm optimization algorithm is proposed. Mini-batch gradient descent is used to tune the decay parameter and learning factor of the reinforcement learning algorithm, improving learning efficiency. The improved reinforcement learning algorithm then trains the improved particle swarm optimizer, and the optimal path is selected according to evaluation metrics. Simulation comparisons with traditional path planning methods verify the superiority of the approach: as the proportion of obstacles increases, the method plans the best path at the lowest overall planning cost, providing a guarantee for intelligent vehicle path planning in complex environments.
Keywords: intelligent vehicle; path planning; gradient descent; reinforcement learning algorithm; particle swarm algorithm
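The mini-batch gradient descent step used above to tune learning parameters can be sketched on made-up regression data (the planner itself is not reproduced):

```python
import numpy as np

# Mini-batch gradient descent on a toy linear model: shuffle the data each
# epoch and take one gradient step per mini-batch. Data and hyperparameters
# are illustrative.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 128)
y = 2.0 * X + 1.0                        # noise-free targets: w=2, b=1
w, b, lr, batch = 0.0, 0.0, 0.1, 16
for epoch in range(1000):
    idx = rng.permutation(128)
    for start in range(0, 128, batch):   # one gradient step per mini-batch
        i = idx[start:start + batch]
        err = w * X[i] + b - y[i]
        w -= lr * 2 * np.mean(err * X[i])
        b -= lr * 2 * np.mean(err)
print(w, b)  # converges to (2.0, 1.0)
```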
20. A Gradient Descent-Based Optimal Machining Marking Method for Rough Turning of Train Axles
Authors: 刘仁明, 张耀, 贺文斌. 电子测量与仪器学报 (Journal of Electronic Measurement and Instrumentation), PKU Core, 2025, Issue 6, pp. 212-220.
After high-temperature forging, the surface of a train axle is irregular, and the axle body may deform during cooling, which makes locating the axis center points for rough turning challenging. Existing methods such as the two-point method, optical projection, and the rotary-axis method consider only the end-face centers or a local surface, ignore body deformation and surface irregularity, and suffer from low efficiency, unmachinable cases, and high product loss. To address these problems, a marking method for optimal rough turning of train axles is proposed. First, a 3D point cloud of the axle body is obtained with a scanner; the cloud is transformed into a working coordinate system, the transformed cloud is sliced, and an initial machining axis is computed. The machining allowance distribution of the product CAD (computer-aided design) model within the point cloud is then analyzed, and a gradient descent optimization strategy adjusts the machining axis position. Finally, the optimal marking points for rough turning are computed and marked on the axle body with a laser marker. The method is implemented in mixed C++ and PCL (Point Cloud Library) code and was validated on site at CRRC for a month; statistics show accuracy above 98% and a 3-6x efficiency gain over manual marking. The method improves the production efficiency of the rough turning process, reduces the scrap rate, and ensures allowance sufficiency and rotational balance during turning.
Keywords: high-temperature forging; axle rough turning; optimal machining; 3D point cloud; CAD model; gradient descent; allowance sufficiency; rotational balance