Journal Articles
277 articles found
Performance Analysis and Multi-Objective Optimization of Functional Gradient Honeycomb Non-pneumatic Tires
1
Authors: Haichao Zhou, Haifeng Zhou, Haoze Ren, Zhou Zheng, Guolin Wang. 《Chinese Journal of Mechanical Engineering》, 2025, No. 3, pp. 412-431.
The spoke, as a key component, has a significant impact on the performance of the non-pneumatic tire (NPT). Current research has focused on adjusting spoke structures to improve a single performance aspect of the NPT; few studies have synergistically improved multiple performance aspects by optimizing the spoke structure. Inspired by the concept of functionally gradient structures, this paper introduces a functionally gradient honeycomb NPT and its optimization method. First, the paper parameterizes the honeycomb spoke structure and establishes numerical models of honeycomb NPTs with seven different gradients. The accuracy of the numerical models is then verified experimentally. Next, the static and dynamic characteristics of these gradient honeycomb NPTs are examined using the finite element method. The findings highlight that the gradient structure of NPT-3 has superior performance. Building on this, the study investigates the effects of key parameters, such as honeycomb spoke thickness and length, on load-carrying capacity, honeycomb spoke stress, and mass. Finally, a multi-objective optimization method is proposed that uses a response surface model (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to further optimize the functionally gradient honeycomb NPTs. The optimized NPT-OP shows a 23.48% reduction in radial stiffness, an 8.95% reduction in maximum spoke stress, and a 16.86% reduction in spoke mass compared with the initial NPT-1. The damping characteristics of the NPT-OP are also improved. The results offer a theoretical foundation and technical methodology for the structural design and optimization of gradient honeycomb NPTs.
Keywords: non-pneumatic tires; honeycomb structure; gradient structure; multi-objective optimization
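The entry above relies on NSGA-II, whose core step is fast non-dominated sorting of a population into successive Pareto fronts. As a minimal illustration of that step only (the tire model and RSM are not reproduced), a sketch in Python with toy objective vectors:

```python
def fast_nondominated_sort(points):
    """Split objective vectors (all minimized) into successive Pareto fronts."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    n = len(points)
    dominated_by = [[] for _ in range(n)]   # indices that point i dominates
    counts = [0] * n                        # how many points dominate i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
    fronts = [[i for i in range(n) if counts[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = fast_nondominated_sort(pts)
print(fronts)  # [[0, 1, 2], [3], [4]]
```

In full NSGA-II, a crowding-distance measure then ranks solutions within each front to preserve diversity.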
Integrating Conjugate Gradients Into Evolutionary Algorithms for Large-Scale Continuous Multi-Objective Optimization (cited: 6)
2
Authors: Ye Tian, Haowen Chen, Haiping Ma, Xingyi Zhang, Kay Chen Tan, Yaochu Jin. 《IEEE/CAA Journal of Automatica Sinica》 (SCIE, EI, CSCD), 2022, No. 10, pp. 1801-1817.
Large-scale multi-objective optimization problems (LSMOPs) pose challenges to existing optimizers, since a set of well-converged and diverse solutions must be found in huge search spaces. While evolutionary algorithms are good at solving small-scale multi-objective optimization problems, they are criticized for low efficiency in converging to the optima of LSMOPs. By contrast, mathematical programming methods offer fast convergence on large-scale single-objective optimization problems, but they have difficulty finding diverse solutions for LSMOPs. How to integrate evolutionary algorithms with mathematical programming methods to solve LSMOPs remains unexplored. In this paper, a hybrid algorithm is tailored for LSMOPs by coupling differential evolution with a conjugate gradient method. On the one hand, conjugate gradients and differential evolution are used to update different decision variables of a set of solutions, where the former drives the solutions to converge quickly towards the Pareto front and the latter promotes the diversity of the solutions to cover the whole Pareto front. On the other hand, the objective decomposition strategy of evolutionary multi-objective optimization is used to differentiate the conjugate gradients of solutions, and the line search strategy of mathematical programming is used to ensure that each offspring is of higher quality than its parent. In comparison with state-of-the-art evolutionary algorithms, mathematical programming methods, and hybrid algorithms, the proposed algorithm exhibits better convergence and diversity performance on a variety of benchmark and real-world LSMOPs.
Keywords: conjugate gradient; differential evolution; evolutionary computation; large-scale multi-objective optimization; mathematical programming
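The hybrid above couples differential evolution with conjugate gradients; the coupling itself is not reproduced here. The sketch below shows only the classic DE/rand/1/bin generation step that such hybrids build on, applied to a toy sphere function (the objective, population size, and F/CR values are illustrative assumptions, not the paper's settings):

```python
import random

def de_step(pop, f, F=0.5, CR=0.9):
    """One generation of DE/rand/1/bin on a minimization problem."""
    n, dim = len(pop), len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        r1, r2, r3 = random.sample([k for k in range(n) if k != i], 3)
        mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
        j_rand = random.randrange(dim)       # at least one gene comes from the mutant
        trial = [mutant[d] if (random.random() < CR or d == j_rand) else target[d]
                 for d in range(dim)]
        new_pop.append(trial if f(trial) <= f(target) else target)  # greedy selection
    return new_pop

random.seed(0)
sphere = lambda x: sum(v * v for v in x)     # toy objective, not an LSMOP
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for _ in range(100):
    pop = de_step(pop, sphere)
best = min(sphere(x) for x in pop)
print(best)  # close to 0
```

The greedy selection makes the best value in the population monotonically non-increasing, which is why DE pairs naturally with a fast local method handling the remaining variables.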
Optimizing the Multi-Objective Discrete Particle Swarm Optimization Algorithm by Deep Deterministic Policy Gradient Algorithm
3
Authors: Sun Yang-Yang, Yao Jun-Ping, Li Xiao-Jun, Fan Shou-Xiang, Wang Zi-Wei. 《Journal on Artificial Intelligence》, 2022, No. 1, pp. 27-35.
Deep deterministic policy gradient (DDPG) has been shown to be effective in optimizing particle swarm optimization (PSO), but whether DDPG can optimize multi-objective discrete particle swarm optimization (MODPSO) remains to be determined. The present work probes this topic. Experiments showed that DDPG can not only quickly improve the convergence speed of MODPSO but also overcome the local-optimum problem that MODPSO may suffer from. The findings are of significance for the theoretical research and application of MODPSO.
Keywords: deep deterministic policy gradient; multi-objective discrete particle swarm optimization; deep reinforcement learning; machine learning
A Modified PRP-HS Hybrid Conjugate Gradient Algorithm for Solving Unconstrained Optimization Problems (cited: 1)
4
Authors: LI Xiangli, WANG Zhiling, LI Binglan. 《应用数学》 (北大核心), 2025, No. 2, pp. 553-564.
In this paper, we propose a three-term conjugate gradient method for solving unconstrained optimization problems, based on the Hestenes-Stiefel (HS) and Polak-Ribière-Polyak (PRP) conjugate gradient methods. Under the standard Wolfe line search, the proposed search direction is a descent direction. For general nonlinear functions, the method is globally convergent. Finally, numerical results show that the proposed method is efficient.
Keywords: conjugate gradient method; unconstrained optimization; sufficient descent condition; global convergence
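The abstract names the HS and PRP update formulas. The sketch below is a generic nonlinear conjugate gradient loop with the common safeguarded hybrid beta = max(0, min(beta_HS, beta_PRP)); it is not the paper's three-term direction, and simple Armijo backtracking stands in for the Wolfe line search:

```python
import numpy as np

def hybrid_cg(f, grad, x0, iters=200, tol=1e-8):
    """Nonlinear CG with beta = max(0, min(beta_HS, beta_PRP)), Armijo backtracking."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:              # safeguard: restart with steepest descent
            d = -g
        t, slope = 1.0, g @ d
        while f(x + t * d) > f(x) + 1e-4 * t * slope:   # Armijo condition
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        beta_hs = (g_new @ y) / (d @ y) if abs(d @ y) > 1e-12 else 0.0
        beta_prp = (g_new @ y) / (g @ g)
        d = -g_new + max(0.0, min(beta_hs, beta_prp)) * d
        x, g = x_new, g_new
    return x

# convex quadratic with minimizer (1, 2), used only as a smoke test
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] - 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] - 2)])
x_opt = hybrid_cg(f, grad, [5.0, 5.0])
print(x_opt)  # close to [1, 2]
```

Truncating beta at zero is the standard PRP+-style restart that underlies global convergence arguments for general nonlinear functions.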
Multi-Objective Optimization Design through Machine Learning for Drop-on-Demand Bioprinting (cited: 7)
5
Authors: Jia Shi, Jinchun Song, Bin Song, Wen F. Lu. 《Engineering》 (SCIE, EI), 2019, No. 3, pp. 586-593.
Drop-on-demand (DOD) bioprinting has been widely used in tissue engineering due to its high-throughput efficiency and cost-effectiveness. However, this type of bioprinting involves challenges such as satellite generation, overly large droplets, and low droplet speed. These challenges reduce the stability and precision of DOD printing, disorder cell arrays, and hence generate further structural errors. In this paper, a multi-objective optimization (MOO) design method for DOD printing parameters using fully connected neural networks (FCNNs) is proposed to address these challenges. The MOO problem comprises two objective functions: to develop the satellite formation model with FCNNs, and to decrease droplet diameter while increasing droplet speed. A hybrid multi-subgradient descent bundle method with an adaptive learning rate (HMSGDBA), which combines the multi-subgradient descent bundle (MSGDB) method with the Adam algorithm, is introduced to search for the Pareto-optimal set of the MOO problem. The superiority of HMSGDBA is demonstrated through comparative studies with the MSGDB method. The experimental results show that a single droplet can be printed stably and the droplet speed can be increased from 0.88 to 2.08 m·s^-1 after optimization with the proposed method. The proposed method improves both printing precision and stability, and is useful for realizing precise cell arrays and complex biological functions. Furthermore, it can be used to derive guidelines for the setup of cell-printing experimental platforms.
Keywords: drop-on-demand printing; inkjet; gradient descent; multi-objective optimization; fully connected neural networks
A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization
6
Authors: Hao Fan, Zhibin Zhu, Anwa Zhou. 《Applied Mathematics》, 2011, No. 9, pp. 1119-1123.
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique that ensures the Zoutendijk condition holds, and the method is proved to be globally convergent. Finally, we improve the method and give further analysis.
Keywords: large-scale unconstrained optimization; conjugate gradient method; sufficient descent property; global convergence
HYBRID MULTI-OBJECTIVE GRADIENT ALGORITHM FOR INVERSE PLANNING OF IMRT
7
Authors: 李国丽, 盛大宁, 王俊椋, 景佳, 王超, 闫冰. 《Transactions of Nanjing University of Aeronautics and Astronautics》 (EI), 2010, No. 1, pp. 97-101.
The intelligent optimization of a multi-objective evolutionary algorithm is combined with a gradient algorithm. The hybrid multi-objective gradient algorithm uses real-number coding. Test functions are used to analyze the efficiency of the algorithm. In a simulated water-phantom case, the algorithm is applied to the inverse planning process of intensity-modulated radiation treatment (IMRT). The objective functions of the planning target volume (PTV) and normal tissue (NT) are based on the average dose distribution. The obtained intensity profile shows that the hybrid multi-objective gradient algorithm saves computational time and has good accuracy, thus meeting the requirements of practical applications.
Keywords: gradient methods; inverse planning; multi-objective optimization; hybrid gradient algorithm
A New Nonlinear Conjugate Gradient Method for Unconstrained Optimization Problems (cited: 1)
8
Authors: LIU Jin-kui, WANG Kai-rong, SONG Xiao-qian, DU Xiang-lin. 《Chinese Quarterly Journal of Mathematics》 (CSCD), 2010, No. 3, pp. 444-450.
In this paper, an efficient conjugate gradient method is given for solving general unconstrained optimization problems, which guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
Keywords: unconstrained optimization; conjugate gradient method; strong Wolfe line search; sufficient descent property; global convergence
A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization (cited: 7)
9
Authors: Xinlei Yi, Shengjun Zhang, Tao Yang, Tianyou Chai, Karl Henrik Johansson. 《IEEE/CAA Journal of Automatica Sinica》 (SCIE, EI, CSCD), 2022, No. 5, pp. 812-833.
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions, using local information exchange, is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters converges linearly to a neighborhood of a global optimum. Numerical experiments demonstrate the efficiency of our algorithm in comparison with baseline centralized SGD and recently proposed distributed SGD algorithms.
Keywords: distributed nonconvex optimization; linear speedup; Polyak-Łojasiewicz (P-L) condition; primal-dual algorithm; stochastic gradient descent
CONVERGENCE RATE OF GRADIENT DESCENT METHOD FOR MULTI-OBJECTIVE OPTIMIZATION (cited: 2)
10
Authors: Liaoyuan Zeng, Yuhong Dai, Yakui Huang. 《Journal of Computational Mathematics》 (SCIE, CSCD), 2019, No. 5, pp. 689-703.
The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOP). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the convergence rate can be strengthened to linear if the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOP enjoys the same convergence properties as in scalar optimization.
Keywords: multi-objective optimization; gradient descent; convergence rate
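In the multi-objective gradient descent analyzed above, the common descent direction at a point is the negative of the minimum-norm element of the convex hull of the objectives' gradients. For two objectives that subproblem has a closed form; a sketch on a pair of assumed strongly convex quadratics (the objectives and stepsize are illustrative, not the paper's test problems):

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Negative of the min-norm point of conv{g1, g2}; closed form for 2 objectives."""
    diff = g1 - g2
    denom = diff @ diff
    lam = 0.5 if denom < 1e-12 else float(np.clip(-(g2 @ diff) / denom, 0.0, 1.0))
    return -(lam * g1 + (1 - lam) * g2)

# two assumed strongly convex objectives; the Pareto set is the segment [a, b]
a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
grad_f1 = lambda x: 2 * (x - a)
grad_f2 = lambda x: 2 * (x - b)

x = np.array([3.0, -1.0])
for _ in range(300):
    d = common_descent_direction(grad_f1(x), grad_f2(x))
    if np.linalg.norm(d) < 1e-8:     # Pareto-critical point reached
        break
    x = x + 0.1 * d                  # constant stepsize, as in the analysis above
print(x)  # a point on the segment between a and b
```

When the min-norm combination is zero, no direction decreases both objectives simultaneously, which is exactly the Pareto-criticality condition used as the stopping test.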
A modified three-term conjugate gradient method with sufficient descent property (cited: 1)
11
Authors: Saman Babaie-Kafaki. 《Applied Mathematics (A Journal of Chinese Universities)》 (SCIE, CSCD), 2015, No. 3, pp. 263-272.
A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak, Ribière, and Polyak is suggested. Based on an eigenvalue analysis, it is shown that the search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the convexity of the objective function. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
Keywords: unconstrained optimization; conjugate gradient method; eigenvalue; sufficient descent condition; global convergence
A Descent Gradient Method and Its Global Convergence
12
Authors: LIU Jin-kui. 《Chinese Quarterly Journal of Mathematics》 (CSCD), 2014, No. 1, pp. 142-150.
Y. Liu and C. Storey (1992) proposed the famous LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under a Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments showing the efficiency of the proposed method in comparison with the well-known PRP+ method.
Keywords: unconstrained optimization; conjugate gradient method; strong Wolfe line search; sufficient descent property; global convergence
A Comparative Study of Optimization Techniques on the Rosenbrock Function
13
Authors: Lebede Ngartera, Coumba Diallo. 《Open Journal of Optimization》, 2024, No. 3, pp. 51-63.
In the evolving landscape of artificial intelligence and machine learning, the choice of optimization algorithm can significantly impact the success of model training and the accuracy of predictions. This paper presents a rigorous exploration of widely adopted optimization techniques, focusing on their performance on the notoriously challenging Rosenbrock function. As a benchmark known for its deceptive curvature and narrow valleys, the Rosenbrock function is fertile ground for examining the nuances of algorithmic behavior. The study covers a diverse array of methods, including traditional gradient descent, its stochastic variant (SGD), and gradient descent with momentum, and extends to adaptive methods such as RMSprop, AdaGrad, and the widely used Adam optimizer. By analyzing and visualizing optimization paths, convergence rates, and gradient norms, the paper uncovers insights into the strengths and limitations of each technique and offers actionable guidance for their deployment in complex, real-world optimization problems.
Keywords: machine learning; optimization algorithm; Rosenbrock function; gradient descent
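The kind of comparison described above is easy to reproduce in a few lines. A hedged sketch of Adam on the Rosenbrock function (the starting point, learning rate, and step count are illustrative, not the paper's settings):

```python
import numpy as np

def rosenbrock(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def rosenbrock_grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x ** 2),
                     200 * (y - x ** 2)])

def adam(x0, lr=0.02, steps=5000, b1=0.9, b2=0.999, eps=1e-8):
    x = np.asarray(x0, float)
    m, v = np.zeros_like(x), np.zeros_like(x)
    for t in range(1, steps + 1):
        g = rosenbrock_grad(x)
        m = b1 * m + (1 - b1) * g            # first-moment estimate
        v = b2 * v + (1 - b2) * g ** 2       # second-moment estimate
        m_hat = m / (1 - b1 ** t)            # bias corrections
        v_hat = v / (1 - b2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

x_adam = adam([-1.5, 2.0])
print(x_adam, rosenbrock(x_adam))  # moves toward the global minimum at (1, 1)
```

Swapping the update for plain gradient descent (`x -= lr * g`) at the same learning rate makes the narrow curved valley's effect immediately visible in the trajectory.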
Joint Power and Bandwidth Allocation Strategy in Deceptive Jamming Scenarios
14
Authors: 李辉, 武会斌, 王伟东, 张恺, 侯庆华. 《电子科技》, 2026, No. 2, pp. 19-27.
To address the degradation of radar performance caused by deceptive jamming, this paper proposes a joint power and bandwidth allocation scheme that improves radar detection accuracy and, through the improved detection performance, strengthens the radar's anti-jamming decision-making capability. The three-dimensional CRLB (Cramér-Rao Lower Bound) of the deceptive range is used to represent radar detection accuracy, and an optimization problem is formulated with the CRLB as the objective function. Considering limited resources, the total power and total bandwidth in the optimization problem are constrained to fixed budgets. Given the nonconvex, nonlinear nature of the resource allocation problem, a solution combining a cyclic minimization algorithm with projected gradient descent is proposed. Simulation experiments under different radar layouts show that, compared with the unoptimized allocation scheme, the jointly optimized scheme reduces the CRLB by 20%-30%, improving radar detection accuracy and mitigating the performance degradation caused by deceptive jamming.
Keywords: distributed MIMO radar; deceptive jamming; false-target discrimination; radar resource allocation; CRLB; cyclic minimization algorithm; nonconvex optimization; projected gradient descent
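The entry above solves a budget-constrained allocation with projected gradient descent. The CRLB objective is not reproduced here; the sketch shows only the building block: a descent step followed by Euclidean projection onto the power simplex {p >= 0, sum(p) = P_total}, applied to an assumed CRLB-like surrogate c_i / p_i (the costs and budget are illustrative):

```python
import numpy as np

def project_simplex(v, total):
    """Euclidean projection of v onto {p >= 0, sum(p) = total} (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - total
    rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

# assumed CRLB-like surrogate: estimation error c_i / p_i falls as power rises
c = np.array([4.0, 1.0, 1.0])
f = lambda p: np.sum(c / p)
grad = lambda p: -c / p ** 2

P_total = 3.0
p = np.full(3, P_total / 3)                      # start from a uniform split
for _ in range(500):
    p = project_simplex(p - 0.01 * grad(p), P_total)
print(p, f(p))  # more power flows to the channel with the larger c_i
```

For this surrogate the optimum is p_i proportional to sqrt(c_i), i.e. (1.5, 0.75, 0.75) here, which gives a quick sanity check on the projected iteration.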
ENTROPICAL OPTIMAL TRANSPORT, SCHRODINGER'S SYSTEM AND ALGORITHMS
15
Authors: Liming WU. 《Acta Mathematica Scientia》 (SCIE, CSCD), 2021, No. 6, pp. 2183-2197.
In this expository paper we present the optimal transport problem of Monge-Ampère-Kantorovich (MAK, in short) and its approximative entropical regularization. Contrary to the MAK optimal transport problem, the solution of the entropical optimal transport problem is always unique, and is characterized by the Schrödinger system. The relationship between the Schrödinger system, the associated Bernstein process, and optimal transport was developed by Léonard [32, 33] (and by Mikami [39] earlier via an h-process). We present Sinkhorn's algorithm for solving the Schrödinger system and recent results on its convergence rate. We study the gradient descent algorithm based on the dual optimization problem and prove its exponential convergence, whose rate might be independent of the regularization constant. This exposition is motivated by recent applications of optimal transport to different domains such as machine learning, image processing, econometrics, and astrophysics.
Keywords: entropical optimal transport; Schrödinger system; Sinkhorn's algorithm; gradient descent
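Sinkhorn's algorithm mentioned above alternates two marginal-scaling steps on the Gibbs kernel K = exp(-C/ε), i.e. iterative proportional fitting for the Schrödinger system. A small self-contained sketch (the point clouds, cost, and ε are illustrative assumptions):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=2000):
    """Entropic OT via alternating scalings of K = exp(-C/eps) (IPF)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)        # rescale to match column marginals
        u = a / (K @ v)          # rescale to match row marginals
    return u[:, None] * K * v[None, :]   # transport plan diag(u) K diag(v)

# uniform marginals on two small 1-D point clouds, squared-distance cost
x = np.linspace(0, 1, 5)
y = np.linspace(0, 1, 4)
C = (x[:, None] - y[None, :]) ** 2
a = np.full(5, 1 / 5)
b = np.full(4, 1 / 4)

P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))  # marginals approach a and b
```

After the final u-update the row marginals match a exactly by construction, while the column marginals converge to b at the linear rate discussed in the exposition.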
A Novel Optimizer in Deep Neural Network for Diabetic Retinopathy Classification
16
Authors: Pranamita Nanda, N. Duraipandian. 《Computer Systems Science & Engineering》 (SCIE, EI), 2022, No. 12, pp. 1099-1110.
In severe cases, diabetic retinopathy can lead to blindness. For decades, automatic classification of diabetic retinopathy images has been a challenge. Medical image processing has benefited from advances in deep learning systems. To enhance the accuracy of image classification driven by a convolutional neural network (CNN), a balanced dataset is generated by data augmentation, followed by an optimized algorithm. Deep neural networks (DNNs) are frequently optimized using gradient-based (GD) techniques, whose main drawback is the vanishing gradient. In this paper, we suggest an innovative algorithm to solve this problem: Hypergradient Descent learning rate based Quasi-hyperbolic (HDQH) gradient descent, used to optimize the weights and biases. The algorithms use only first-order gradients, which reduces computation time and storage requirements, and they require little tuning of the learning rate, as the learning rate tunes itself by means of gradients. We present an empirical evaluation of our algorithm on two public retinal image datasets, Messidor and DDR, using ResNet18 and Inception V3 architectures. The experiments show that the efficiency and accuracy of our algorithm outperform other cutting-edge algorithms; HDQHAdam shows the highest accuracy, 97.5 on ResNet18 and 95.7 on Inception V3, respectively.
Keywords: CNN; diabetic retinopathy; data augmentation; gradient descent; deep learning; optimization
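The hypergradient idea behind the entry above adapts the learning rate online using the inner product of successive gradients. The sketch below shows that rule on plain gradient descent only, applied to an assumed ill-conditioned quadratic; the paper's HDQH combination with quasi-hyperbolic momentum is not reproduced, and the hyperparameters are illustrative:

```python
import numpy as np

def hd_gd(grad, x0, alpha0=0.001, beta=1e-6, steps=500):
    """Gradient descent whose learning rate is adapted by a hypergradient step."""
    x = np.asarray(x0, float)
    alpha = alpha0
    g_prev = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        alpha += beta * (g @ g_prev)   # hypergradient update of the learning rate
        x = x - alpha * g
        g_prev = g
    return x, alpha

# assumed ill-conditioned quadratic: f(x) = 0.5 * (x1^2 + 10 * x2^2)
grad = lambda x: np.array([1.0, 10.0]) * x
x, alpha = hd_gd(grad, [5.0, 5.0])
print(x, alpha)  # x approaches 0 while the rate grows from its small initial value
```

While successive gradients point the same way their inner product is positive and the rate grows; once it overshoots, the product turns negative and the rate is pulled back, which is the self-tuning behavior the abstract refers to.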
An Objective-Based Gradient Method for Locating the Pareto Domain
17
Authors: Allan Vandervoort, Jules Thibault, Yash Gupta. 《Journal of Chemistry and Chemical Engineering》, 2011, No. 7, pp. 608-623.
In this paper, an objective-based gradient multi-objective optimization (MOO) technique, the Objective-Based Gradient Algorithm (OBGA), is proposed with the goal of defining the Pareto domain more precisely and efficiently than current MOO techniques. The performance of the OBGA in locating the Pareto domain was evaluated in terms of precision, computation time, and number of objective function calls, and compared with two current MOO algorithms, the Dual Population Evolutionary Algorithm (DPEA) and the Non-Dominated Sorting Genetic Algorithm II (NSGA-II), on four test problems. For all test problems, the OBGA systematically produced a more precise Pareto domain than DPEA and NSGA-II. With adequate selection of the OBGA parameters, the computation time required for the OBGA can be lower than that required for DPEA and NSGA-II. The results clearly show that the OBGA is a very effective and efficient algorithm for locating the Pareto domain.
Keywords: Pareto domain; multi-objective optimization; gradient method
Intrusion Detection in Computer Networks Based on an Ensemble Graph Convolutional Neural Network (cited: 1)
18
Authors: 范申民, 王磊, 张芬. 《自动化与仪器仪表》, 2025, No. 5, pp. 7-11.
To safeguard the security of the network environment, a network intrusion detection technique based on an ensemble graph convolutional neural network is proposed. The method uses stochastic gradient descent together with the RMSProp (Root Mean Square Propagation) optimizer to improve the training efficiency of the detection model and strengthen its classification performance. Results show that the model achieves an intrusion detection accuracy of 96.41%-97.18%. After the proposed optimization, the intrusion detection technique shows clear improvements in both training efficiency and training accuracy. The model classifies data by access source, improving the classification of access behavior; the improved classification in turn raises the efficiency of recognizing attack behavior, strengthening the computer's defenses and effectively safeguarding users' network environment. The study thus provides an effective technical method for detecting network intrusions.
Keywords: ensemble graph convolutional neural network; network intrusion detection; stochastic gradient descent; RMSProp optimizer
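The RMSProp update named above keeps a running average of squared gradients and divides each step by its square root, giving a per-coordinate adaptive rate. A minimal sketch on an assumed badly scaled quadratic (not the paper's network training):

```python
import numpy as np

def rmsprop(grad, x0, lr=0.01, rho=0.9, eps=1e-8, steps=1000):
    """RMSProp: divide each step by a running RMS of recent gradients."""
    x = np.asarray(x0, float)
    s = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        s = rho * s + (1 - rho) * g ** 2      # running mean of squared gradients
        x = x - lr * g / (np.sqrt(s) + eps)   # per-coordinate adaptive step
    return x

# assumed badly scaled quadratic; a single fixed rate handles this poorly in plain GD
grad = lambda x: np.array([1.0, 100.0]) * x
x_final = rmsprop(grad, [3.0, 3.0])
print(x_final)  # hovers near [0, 0]; constant-rate RMSProp oscillates at the end
```

Because the normalized step stays roughly `lr` in size, a constant-rate RMSProp run settles into a small oscillation around the minimum rather than converging exactly, which is why frameworks pair it with rate decay.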
Inversion of the Initial Geo-stress Field of Underground Caverns at a Hydropower Station Based on GD-PSO
19
Authors: 包腾飞, 程健悦, 邢钰, 周喜武, 陈雨婷, 赵向宇. 《郑州大学学报(工学版)》 (北大核心), 2025, No. 5, pp. 130-136.
To address the difficulty existing initial geo-stress field inversion methods have in balancing convergence speed against nonlinear regression accuracy, an inversion method combining gradient descent (GD) and particle swarm optimization (PSO) is proposed. First, considering eight basic boundary conditions covering the gravity field and five types of tectonic stress fields that influence the initial geo-stress field, finite element software is used to compute the stress at measurement points under each boundary condition. Second, taking the measured geo-stress values as targets, regression analysis with the GD-PSO algorithm yields the influence coefficient of each boundary condition. Finally, the regressed geo-stress at each point of the model is computed and input as the initial geo-stress field into a three-dimensional finite element model for stress equilibrium. A case study shows that, compared with PSO alone, the cubic regression polynomial obtained with GD-PSO has the highest accuracy, with a mean squared error of 0.579, and the regression fits the measured geo-stress values well. After stress equilibrium, the differences between computed and measured values at the measurement points are small except for the vertical stress component, the displacements of the surrounding rock are essentially zero in all directions, and the maximum displacement is only 5.26 mm.
Keywords: large pumped-storage power station; underground cavern group; geo-stress inversion; gradient descent; particle swarm optimization
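This entry and the next both combine particle swarm search with gradient descent. A generic two-phase sketch of that combination, global PSO exploration followed by a local gradient-descent polish of the best particle, on an assumed smooth test surface (the function, swarm coefficients, and step counts are illustrative, not the geo-stress regression):

```python
import random

def f(x, y):                 # assumed smooth test surface, not the paper's model
    return (x - 2) ** 2 + (y + 1) ** 2 + 0.3 * x * y

def grad_f(x, y):
    return (2 * (x - 2) + 0.3 * y, 2 * (y + 1) + 0.3 * x)

def pso_gd(n=20, pso_iters=50, gd_iters=200, lr=0.05):
    """Global PSO exploration followed by a local gradient-descent polish."""
    random.seed(1)
    pos = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(n)]
    vel = [(0.0, 0.0)] * n
    pbest = pos[:]
    gbest = min(pos, key=lambda p: f(*p))
    for _ in range(pso_iters):                   # phase 1: swarm search
        for i, ((x, y), (vx, vy)) in enumerate(zip(pos, vel)):
            r1, r2 = random.random(), random.random()
            vx = 0.7 * vx + 1.5 * r1 * (pbest[i][0] - x) + 1.5 * r2 * (gbest[0] - x)
            vy = 0.7 * vy + 1.5 * r1 * (pbest[i][1] - y) + 1.5 * r2 * (gbest[1] - y)
            pos[i], vel[i] = (x + vx, y + vy), (vx, vy)
            if f(*pos[i]) < f(*pbest[i]):
                pbest[i] = pos[i]
            if f(*pos[i]) < f(*gbest):
                gbest = pos[i]
    x, y = gbest                                 # phase 2: gradient polish
    for _ in range(gd_iters):
        gx, gy = grad_f(x, y)
        x, y = x - lr * gx, y - lr * gy
    return x, y

x_star, y_star = pso_gd()
print(x_star, y_star)  # near the stationary point of f
```

The division of labor is the point: the swarm supplies a good basin without gradients, and the gradient phase supplies the fast local convergence PSO alone lacks.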
Measurement of Objective-Lens Polarization Aberration Based on Hybrid Particle Swarm and Gradient Descent Optimization
20
Authors: 裴世鑫, 郑改革, 曹兆楼. 《仪器仪表学报》 (北大核心), 2025, No. 8, pp. 198-205.
Owing to coating effects, material birefringence, and internal stress, high-numerical-aperture optical systems inevitably exhibit some polarization aberration, making the imaging quality dependent on the polarization state of the incident light. Existing polarization aberration measurement techniques generally require complex apparatus and are inconvenient and inefficient. To address this, a double-pass optical path is proposed for the measurement, and polarization-resolved wavefront measurement is used to retrieve the Jones matrix of a microscope objective from the intensity distribution of the focused field, reducing system complexity. First, a numerical model based on ray tracing and scalar diffraction theory simulates the intensity distribution at different axial positions of the focused field. Second, complex-amplitude retrieval is cast as an optimization problem, and an inversion model is built with a hybrid particle swarm and gradient descent algorithm: the Zernike polynomial coefficients characterizing the polarization aberration are optimized to minimize the deviation between the predicted and target intensity distributions, retrieving the Jones matrix of the optical system. Third, numerical simulations with a given polarization aberration show that the retrieved Jones matrix elements agree well with the target values, with errors below 10^-3. Finally, the polarization aberration of a commercial high-numerical-aperture microscope objective is measured experimentally; the retrieved Zernike coefficients yield predicted intensity distributions consistent with the targets. Theoretical and experimental results show that the proposed algorithm effectively recovers the polarization aberration of a microscope objective, has a simple structure and convenient operation, and is expected to provide a new technique for the manufacturing inspection of high-numerical-aperture optical systems.
Keywords: polarization aberration; phase retrieval; particle swarm optimization; gradient descent; parallel computing; Jones matrix