Journal Articles
283 articles found
1. Performance Analysis and Multi-Objective Optimization of Functional Gradient Honeycomb Non-pneumatic Tires
Authors: Haichao Zhou, Haifeng Zhou, Haoze Ren, Zhou Zheng, Guolin Wang. Chinese Journal of Mechanical Engineering, 2025, No. 3, pp. 412-431.
The spoke, as a key component, has a significant impact on the performance of the non-pneumatic tire (NPT). Current research has focused on adjusting spoke structures to improve a single performance metric of the NPT; few studies have synergistically improved multiple performance metrics by optimizing the spoke structure. Inspired by the concept of functionally gradient structures, this paper introduces a functionally gradient honeycomb NPT and its optimization method. First, the paper parameterizes the honeycomb spoke structure and establishes numerical models of honeycomb NPTs with seven different gradients. The accuracy of the numerical models is then verified experimentally, and the static and dynamic characteristics of the gradient honeycomb NPTs are examined using the finite element method. The findings highlight that the gradient structure of NPT-3 has superior performance. Building on this, the study investigates the effects of key parameters, such as honeycomb spoke thickness and length, on load-carrying capacity, honeycomb spoke stress, and mass. Finally, a multi-objective optimization method is proposed that uses a response surface model (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to further optimize the functionally gradient honeycomb NPTs. The optimized NPT-OP shows a 23.48% reduction in radial stiffness, an 8.95% reduction in maximum spoke stress, and a 16.86% reduction in spoke mass compared with the initial NPT-1; its damping characteristics are also improved. The results offer a theoretical foundation and technical methodology for the structural design and optimization of gradient honeycomb NPTs.
Keywords: non-pneumatic tires, honeycomb structure, gradient structure, multi-objective optimization
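The optimization step in this entry builds on NSGA-II, whose core primitive is Pareto dominance. As a minimal illustrative sketch (not code from the paper), a non-dominated filter over objective vectors for a minimization problem can be written as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (1,4), (2,2), (4,1) form the front; (3,3) is dominated by (2,2)
front = non_dominated([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)])
```

NSGA-II applies this filter repeatedly to rank a population into fronts before crowding-distance selection.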
2. Integrating Conjugate Gradients Into Evolutionary Algorithms for Large-Scale Continuous Multi-Objective Optimization
Authors: Ye Tian, Haowen Chen, Haiping Ma, Xingyi Zhang, Kay Chen Tan, Yaochu Jin. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 10, pp. 1801-1817. Cited by 7.
Large-scale multi-objective optimization problems (LSMOPs) pose challenges to existing optimizers, since a set of well-converged and diverse solutions must be found in huge search spaces. While evolutionary algorithms are good at solving small-scale multi-objective optimization problems, they are criticized for low efficiency in converging to the optimums of LSMOPs. By contrast, mathematical programming methods offer fast convergence on large-scale single-objective optimization problems, but have difficulty finding diverse solutions for LSMOPs. How to integrate evolutionary algorithms with mathematical programming methods to solve LSMOPs remains unexplored. In this paper, a hybrid algorithm is tailored for LSMOPs by coupling differential evolution and a conjugate gradient method. On the one hand, conjugate gradients and differential evolution are used to update different decision variables of a set of solutions, where the former drives the solutions to converge quickly towards the Pareto front and the latter promotes the diversity of the solutions to cover the whole Pareto front. On the other hand, the objective decomposition strategy of evolutionary multi-objective optimization is used to differentiate the conjugate gradients of solutions, and the line search strategy of mathematical programming is used to ensure that each offspring is of higher quality than its parent. In comparison with state-of-the-art evolutionary algorithms, mathematical programming methods, and hybrid algorithms, the proposed algorithm exhibits better convergence and diversity on a variety of benchmark and real-world LSMOPs.
Keywords: conjugate gradient, differential evolution, evolutionary computation, large-scale multi-objective optimization, mathematical programming
3. Optimizing the Multi-Objective Discrete Particle Swarm Optimization Algorithm by Deep Deterministic Policy Gradient Algorithm
Authors: Sun Yang-Yang, Yao Jun-Ping, Li Xiao-Jun, Fan Shou-Xiang, Wang Zi-Wei. Journal on Artificial Intelligence, 2022, No. 1, pp. 27-35.
Deep deterministic policy gradient (DDPG) has been proved effective in optimizing particle swarm optimization (PSO), but whether DDPG can optimize multi-objective discrete particle swarm optimization (MODPSO) remains to be determined. The present work probes into this topic. Experiments showed that DDPG can not only quickly improve the convergence speed of MODPSO, but also overcome the local-optimum problem that MODPSO may suffer from. The findings are of significance for the theoretical research and application of MODPSO.
Keywords: deep deterministic policy gradient, multi-objective discrete particle swarm optimization, deep reinforcement learning, machine learning
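The MODPSO variant is not specified in the abstract; as background, here is a minimal continuous PSO loop in its standard textbook form, i.e. the kind of algorithm DDPG is used to tune. All parameter values below are illustrative defaults, not the paper's settings:

```python
import random

def pso_minimize(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Textbook global-best PSO: velocity mixes inertia, cognitive
    (personal best) and social (global best) attraction terms."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
                if f(x[i]) < f(gbest):
                    gbest = x[i][:]
    return gbest

sphere = lambda p: sum(t * t for t in p)
best = pso_minimize(sphere)
```

In the paper's setting, a DDPG agent would adjust quantities such as the inertia weight `w` during the run rather than leaving them fixed.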
4. A Modified PRP-HS Hybrid Conjugate Gradient Algorithm for Solving Unconstrained Optimization Problems
Authors: LI Xiangli, WANG Zhiling, LI Binglan. 《应用数学》 (PKU Core), 2025, No. 2, pp. 553-564. Cited by 1.
In this paper, we propose a three-term conjugate gradient method for solving unconstrained optimization problems, based on the Hestenes-Stiefel (HS) and Polak-Ribière-Polyak (PRP) conjugate gradient methods. Under the standard Wolfe line search, the proposed search direction is a descent direction. For general nonlinear functions, the method is globally convergent. Finally, numerical results show that the proposed method is efficient.
Keywords: conjugate gradient method, unconstrained optimization, sufficient descent condition, global convergence
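A generic sketch of a hybrid PRP/HS conjugate gradient iteration with Armijo backtracking. This uses the common textbook hybrid choice beta = max(0, min(beta_HS, beta_PRP)) plus a steepest-descent restart safeguard; it is not the paper's modified three-term scheme or its Wolfe line search:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hybrid_cg(f, grad, x, tol=1e-6, max_iter=1000):
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        if math.sqrt(dot(g, g)) < tol:
            break
        if dot(g, d) >= 0:            # safeguard: restart with steepest descent
            d = [-gi for gi in g]
        # Armijo backtracking line search along d
        t, fx, slope = 1.0, f(x), dot(g, d)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        beta_prp = dot(g_new, y) / dot(g, g)
        denom = dot(d, y)
        beta_hs = dot(g_new, y) / denom if denom != 0 else 0.0
        beta = max(0.0, min(beta_hs, beta_prp))   # hybrid PRP/HS choice
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

quad = lambda p: (p[0] - 1) ** 2 + 10 * (p[1] + 2) ** 2
quad_grad = lambda p: [2 * (p[0] - 1), 20 * (p[1] + 2)]
sol = hybrid_cg(quad, quad_grad, [0.0, 0.0])
```

The clipping at zero is what gives PRP+-style global-convergence behavior; the paper's contribution is a modified three-term direction with a stronger descent guarantee under Wolfe conditions.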
5. Gradient Descent-Based Prediction of Heat-Transmission Rate of Engine Oil-Based Hybrid Nanofluid over Trapezoidal and Rectangular Fins for Sustainable Energy Systems
Authors: Maddina Dinesh Kumar, S.U. Mamatha, Khalid Masood, Nehad Ali Shah, Se-Jin Yook. Computer Modeling in Engineering & Sciences, 2026, No. 1, pp. 627-660.
Fluid dynamics research on rectangular and trapezoidal fins aims to increase heat transfer by means of large surfaces. Comparing the thermal and flow performance of the trapezoidal cavity form reveals that trapezoidal fins tend to be more efficient, particularly when material optimization is critical. Motivated by the increasing need for sustainable energy management, this work analyses the thermal performance of inclined trapezoidal and rectangular porous fins utilising a unique hybrid nanofluid. The effectiveness of nanoparticles in a working fluid is primarily determined by their thermophysical properties; hence, optimising these properties can significantly improve overall performance. This study considers the dispersion of graphene oxide (GO) and molybdenum disulfide in the base fluid, engine oil. Temperature profiles are analysed by varying the radiative, porosity, wet porous, and angle-of-inclination parameters. Surface and contour plots are constructed using the Lobatto IIIa collocation method with the BVP5C solver in MATLAB, and gradient descent optimisation is used to predict the combined heat transfer rate. According to the study, fluid temperature consistently decreases as the angle of inclination, wet porous parameter, porosity parameter, and radiative parameter increase, indicating significantly improved heat dissipation. The trapezoidal fin consistently exhibits a superior heat transfer mechanism to the rectangular fin; it is found to transmit heat at a rate 0.05% higher than that of the rectangular fin. The present study is validated through comparison with previous studies. This research provides useful design insights for sophisticated engineering uses, including electrical cooling devices, heat exchangers, radiators, and solar heaters.
Keywords: rectangular fin, hybrid nanofluid, trapezoidal fin, angle of inclination, gradient descent optimization, Lobatto IIIa collocation method
6. Multi-Objective Optimization Design through Machine Learning for Drop-on-Demand Bioprinting
Authors: Jia Shi, Jinchun Song, Bin Song, Wen F. Lu. Engineering (SCIE, EI), 2019, No. 3, pp. 586-593. Cited by 7.
Drop-on-demand (DOD) bioprinting has been widely used in tissue engineering due to its high-throughput efficiency and cost effectiveness. However, this type of bioprinting involves challenges such as satellite generation, too-large droplet generation, and too-low droplet speed. These challenges reduce the stability and precision of DOD printing, disorder cell arrays, and hence generate further structural errors. In this paper, a multi-objective optimization (MOO) design method for DOD printing parameters based on fully connected neural networks (FCNNs) is proposed to solve these challenges. The MOO problem comprises two objectives: developing the satellite formation model with FCNNs, and decreasing droplet diameter while increasing droplet speed. A hybrid multi-subgradient descent bundle method with an adaptive learning rate algorithm (HMSGDBA), which combines the multi-subgradient descent bundle (MSGDB) method with the Adam algorithm, is introduced to search for the Pareto-optimal set of the MOO problem. The superiority of HMSGDBA is demonstrated through comparative studies with the MSGDB method. The experimental results show that a single droplet can be printed stably and the droplet speed can be increased from 0.88 to 2.08 m·s^-1 after optimization with the proposed method. The proposed method improves both printing precision and stability, and is useful for realizing precise cell arrays and complex biological functions. Furthermore, it can be used to derive guidelines for the setup of cell-printing experimental platforms.
Keywords: drop-on-demand printing, inkjet, gradient descent, multi-objective optimization, fully connected neural networks
7. A New Descent Nonlinear Conjugate Gradient Method for Unconstrained Optimization
Authors: Hao Fan, Zhibin Zhu, Anwa Zhou. Applied Mathematics, 2011, No. 9, pp. 1119-1123.
In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization. The sufficient descent property holds without any line search. We use a steplength technique that ensures the Zoutendijk condition holds, and the method is proved globally convergent. Finally, we improve the method and give further analysis.
Keywords: large-scale unconstrained optimization, conjugate gradient method, sufficient descent property, global convergence
8. Learning to optimize by multi-gradient for multi-objective optimization
Authors: Linxi Yang, Xinmin Yang, Liping Tang. Science China Mathematics, 2026, No. 2, pp. 539-570.
The development of artificial intelligence for science has led to the emergence of learning-based research paradigms, necessitating a reevaluation of the design of multi-objective optimization (MOO) methods. The new generation of MOO methods should be rooted in automated learning rather than manual design. In this paper, we introduce a new automatic learning paradigm for optimizing MOO problems and propose a multi-gradient learning to optimize (ML2O) method, which automatically learns a generator (or mapping) from multiple gradients to update directions. As a learning-based method, ML2O acquires knowledge of local landscapes by leveraging information from the current step and incorporates global experience extracted from historical iteration trajectory data. By introducing a new guarding mechanism, we propose a guarded multi-gradient learning to optimize (GML2O) method and prove that the iterative sequence generated by GML2O converges to a Pareto stationary point. Experimental results demonstrate that our learned optimizer outperforms hand-designed competitors in training multi-task learning neural networks.
Keywords: multi-objective optimization, learning to optimize, stochastic gradient method, safeguard
9. HYBRID MULTI-OBJECTIVE GRADIENT ALGORITHM FOR INVERSE PLANNING OF IMRT
Authors: 李国丽, 盛大宁, 王俊椋, 景佳, 王超, 闫冰. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2010, No. 1, pp. 97-101.
The intelligent optimization of a multi-objective evolutionary algorithm is combined with a gradient algorithm. The hybrid multi-objective gradient algorithm uses real-number encoding. Test functions are used to analyze the efficiency of the algorithm. In a simulated water-phantom case, the algorithm is applied to the inverse planning process of intensity-modulated radiation treatment (IMRT). The objective functions of the planning target volume (PTV) and normal tissue (NT) are based on the average dose distribution. The obtained intensity profile shows that the hybrid multi-objective gradient algorithm saves computational time and has good accuracy, thus meeting the requirements of practical applications.
Keywords: gradient methods, inverse planning, multi-objective optimization, hybrid gradient algorithm
10. A New Nonlinear Conjugate Gradient Method for Unconstrained Optimization Problems
Authors: LIU Jin-kui, WANG Kai-rong, SONG Xiao-qian, DU Xiang-lin. Chinese Quarterly Journal of Mathematics (CSCD), 2010, No. 3, pp. 444-450. Cited by 1.
In this paper, an efficient conjugate gradient method is given for solving general unconstrained optimization problems, which guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation.
Keywords: unconstrained optimization, conjugate gradient method, strong Wolfe line search, sufficient descent property, global convergence
11. Distributed momentum gradient descent convex optimization algorithm with network communication
Authors: Pengfei LIU, Haiyin PIAO, Rui WANG, Kunzhi LIU. Science China (Technological Sciences), 2026, No. 1, pp. 162-177.
This paper proposes a distributed continuous-time momentum gradient descent (MGD) algorithm for convex optimization over multi-agent networks, where agents collaboratively minimize the sum of local convex cost functions through coordinated communication. First, we establish exponential convergence under ideal continuous-time coordination through Lyapunov analysis. To bridge the gap between theoretical designs and digital implementations, two strategies are developed: (1) a time-triggered control (TTC) scheme that guarantees stability under bounded sampling intervals; and (2) a periodic event-triggered control (PETC) strategy. Notably, the PETC strategy addresses the inefficient network resource utilization inherent in TTC by activating communication only when necessary. By formulating the PETC-based algorithm as a hybrid dynamical system with event-driven thresholds, we construct a parameterized hybrid Lyapunov function to rigorously prove global asymptotic stability of the equilibrium point. Comprehensive numerical experiments confirm the convergence of the algorithm under both strategies, with PETC reducing communication frequency compared with TTC while maintaining solution accuracy.
Keywords: distributed convex optimization, momentum gradient descent, hybrid systems, periodic event-triggered control
12. A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization
Authors: Xinlei Yi, Shengjun Zhang, Tao Yang, Tianyou Chai, Karl Henrik Johansson. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 5, pp. 812-833. Cited by 8.
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions using local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
Keywords: distributed nonconvex optimization, linear speedup, Polyak-Łojasiewicz (P-L) condition, primal-dual algorithm, stochastic gradient descent
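The paper's primal-dual SGD is not reproduced here; a much simpler decentralized gradient scheme (mix with neighbors, then step on the local gradient) illustrates the setting of minimizing a sum of local costs, including the constant-stepsize effect the abstract mentions of converging only to a neighborhood of the optimum. The network, costs, and stepsize are toy choices:

```python
def decentralized_gd(local_grads, neighbors, x0, step=0.05, iters=400):
    """Decentralized GD: each agent averages its neighbors' states
    (self-loops included), then takes a step on its own local gradient."""
    n = len(local_grads)
    x = [x0] * n
    for _ in range(iters):
        mixed = [sum(x[j] for j in neighbors[i]) / len(neighbors[i])
                 for i in range(n)]
        x = [mixed[i] - step * local_grads[i](mixed[i]) for i in range(n)]
    return x

# three agents with local costs (x - a_i)^2; the global optimum is mean(a) = 2
targets = [1.0, 2.0, 3.0]
grads = [lambda z, a=a: 2 * (z - a) for a in targets]
graph = [[0, 1, 2]] * 3   # fully connected, self-loops included
xs = decentralized_gd(grads, graph, 0.0)
```

With a constant stepsize the agents' average converges to the global optimum, while each individual state settles a small, stepsize-proportional distance away, mirroring the "neighborhood of a global optimum" behavior.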
13. CONVERGENCE RATE OF GRADIENT DESCENT METHOD FOR MULTI-OBJECTIVE OPTIMIZATION
Authors: Liaoyuan Zeng, Yuhong Dai, Yakui Huang. Journal of Computational Mathematics (SCIE, CSCD), 2019, No. 5, pp. 689-703. Cited by 2.
The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOPs). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the convergence rate can be strengthened to linear if the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOPs enjoys the same convergence properties as in scalar optimization.
Keywords: multi-objective optimization, gradient descent, convergence rate
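For two objectives, the common descent direction used by multi-objective gradient descent is the negative of the min-norm element of the convex hull of the two gradients, which has a closed form. An illustrative sketch with a constant stepsize, as in the paper's first setting (the stopping threshold and the toy objectives are arbitrary choices):

```python
def min_norm_convex(g1, g2):
    """Min-norm element of the segment between g1 and g2 (two objectives)."""
    diff = [a - b for a, b in zip(g1, g2)]
    denom = sum(d * d for d in diff)
    lam = 0.5 if denom == 0 else max(
        0.0, min(1.0, sum(b * d for b, d in zip(g2, diff)) / denom))
    return [lam * a + (1 - lam) * b for a, b in zip(g1, g2)]

def mo_gradient_descent(grads, x, step=0.1, iters=500):
    for _ in range(iters):
        g1, g2 = grads[0](x), grads[1](x)
        v = min_norm_convex(g1, g2)
        if sum(t * t for t in v) < 1e-12:   # Pareto-stationary: stop
            break
        x = [xi - step * vi for xi, vi in zip(x, v)]
    return x

# two convex objectives on the real line: f1 = x^2, f2 = (x-1)^2.
# Their Pareto set is the interval [0, 1].
g_a = lambda p: [2 * p[0]]
g_b = lambda p: [2 * (p[0] - 1)]
x_star = mo_gradient_descent([g_a, g_b], [3.0])
```

Starting from x = 3, the iterates decrease monotonically for both objectives and stop once the min-norm combination of the two gradients vanishes, i.e. at a Pareto-stationary point at the edge of [0, 1].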
14. A modified three-term conjugate gradient method with sufficient descent property
Author: Saman Babaie-Kafaki. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2015, No. 3, pp. 263-272. Cited by 1.
A hybridization of the three-term conjugate gradient method proposed by Zhang et al. and the nonlinear conjugate gradient method proposed by Polak and Ribière, and by Polyak, is suggested. Based on an eigenvalue analysis, it is shown that search directions of the proposed method satisfy the sufficient descent condition, independent of the line search and of the objective function's convexity. Global convergence of the method is established under an Armijo-type line search condition. Numerical experiments show the practical efficiency of the proposed method.
Keywords: unconstrained optimization, conjugate gradient method, eigenvalue, sufficient descent condition, global convergence
15. A Descent Gradient Method and Its Global Convergence
Author: LIU Jin-kui. Chinese Quarterly Journal of Mathematics (CSCD), 2014, No. 1, pp. 142-150.
Y. Liu and C. Storey (1992) proposed the famous LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under the Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments showing the efficiency of the proposed method in comparison with the famous PRP+ method.
Keywords: unconstrained optimization, conjugate gradient method, strong Wolfe line search, sufficient descent property, global convergence
16. A Comparative Study of Optimization Techniques on the Rosenbrock Function
Authors: Lebede Ngartera, Coumba Diallo. Open Journal of Optimization, 2024, No. 3, pp. 51-63.
In the evolving landscape of artificial intelligence and machine learning, the choice of optimization algorithm can significantly impact the success of model training and the accuracy of predictions. This paper is a rigorous, comprehensive exploration of widely adopted optimization techniques, focusing on their performance when applied to the notoriously challenging Rosenbrock function. As a benchmark problem known for its deceptive curvature and narrow valleys, the Rosenbrock function provides fertile ground for examining the nuances of algorithmic behavior. The study covers a diverse array of optimization methods, including traditional gradient descent, its stochastic variant (SGD), and gradient descent with momentum. The investigation further extends to adaptive methods such as RMSprop, AdaGrad, and the widely used Adam optimizer. By analyzing and visualizing the optimization paths, convergence rates, and gradient norms, the paper uncovers critical insights into the strengths and limitations of each technique. The findings illuminate the dynamics of these algorithms and offer actionable guidance for their deployment in complex, real-world optimization problems, revealing the subtle yet profound impact of algorithmic choices.
Keywords: machine learning, optimization algorithm, Rosenbrock function, gradient descent
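A minimal version of the comparison the paper runs: plain gradient descent versus gradient descent with heavy-ball momentum on the Rosenbrock function, starting from the classic point (-1.2, 1). The learning rate, momentum coefficient, and step counts below are illustrative choices, not the paper's settings:

```python
def rosen(x, y):
    """Rosenbrock function with a = 1, b = 100; minimum at (1, 1)."""
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2

def rosen_grad(x, y):
    return (-2 * (1 - x) - 400 * x * (y - x * x), 200 * (y - x * x))

def run(momentum=0.0, lr=1e-3, steps=30000):
    x, y, vx, vy = -1.2, 1.0, 0.0, 0.0
    for _ in range(steps):
        gx, gy = rosen_grad(x, y)
        vx = momentum * vx - lr * gx   # momentum = 0 gives plain GD
        vy = momentum * vy - lr * gy
        x, y = x + vx, y + vy
    return x, y

plain = run(momentum=0.0)
heavy = run(momentum=0.9)
```

Plain gradient descent crawls along the curved valley, while the momentum term accumulates velocity along the valley floor and makes markedly faster progress toward (1, 1), which is the qualitative contrast the paper visualizes.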
17. Accelerating Private Neural-Network Inference via Automatically Differentiable Activation-Function Approximation
Authors: 顾颖, 庞智, 余荣威, 王丽娜. 《武汉大学学报(理学版)》 (PKU Core), 2026, No. 1, pp. 35-46.
To address the efficiency bottleneck of ReLU (the nonlinear unit) in private-inference schemes such as homomorphic encryption and secure multi-party computation, this paper proposes a nonlinear optimization method for neural networks: the ReLURep (ReLU Replace) framework. It uses automatically differentiable gradient descent with a binarized auxiliary control mask to automatically locate and replace ReLU operations, striking a balance between efficiency and accuracy. ReLURep uses a learnable binary mask to locate the ReLU activations to be replaced; once the mask is fixed, the optimal polynomial parameter configuration is learned automatically through end-to-end joint training, reducing the model performance degradation caused by ReLU replacement. In addition, the framework adopts a feature-distribution distillation method that learns the optimal coefficients layer by layer, minimizing the accuracy loss caused by network linearization. Experiments on CIFAR10, CIFAR100, and Tiny ImageNet show that the method achieves significant improvements under the vast majority of ReLU-operation budgets. On CIFAR100 with a ReLU budget of 6000, it reaches 75.29% accuracy, 1.5 percentage points higher than the best existing method.
Keywords: deep learning, privacy protection, private-inference acceleration, polynomial function approximation, activation-function optimization, gradient descent
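The polynomial replacement of ReLU can be illustrated in miniature: fitting a degree-2 polynomial to ReLU on [-1, 1] by gradient descent on the mean squared error. This is a toy least-squares fit, not the ReLURep framework (no masks, no distillation, no end-to-end training):

```python
def fit_relu_poly(degree=2, lr=0.05, iters=5000, grid=101):
    """Fit c0 + c1*x + ... + cd*x^d to ReLU on [-1, 1] by gradient descent."""
    xs = [-1 + 2 * i / (grid - 1) for i in range(grid)]
    relu = [max(0.0, x) for x in xs]
    c = [0.0] * (degree + 1)
    for _ in range(iters):
        g = [0.0] * (degree + 1)
        for x, y in zip(xs, relu):
            err = sum(ck * x ** k for k, ck in enumerate(c)) - y
            for k in range(degree + 1):
                g[k] += 2 * err * x ** k / len(xs)   # d(MSE)/d(c_k)
        c = [ck - lr * gk for ck, gk in zip(c, g)]
    return c

coeffs = fit_relu_poly()
```

The fit recovers the classic shape relu(x) ≈ a + x/2 + c*x^2 on [-1, 1] (the linear coefficient converging to exactly 1/2 by symmetry); in private inference, such low-degree polynomials are cheap under homomorphic encryption while ReLU is not.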
18. The Refractive-Index Inversion Problem for the Terrain Parabolic Equation
Author: 李晓燕. 《兰州文理学院学报(自然科学版)》, 2026, No. 1, pp. 15-22.
For the complex-domain inversion of the refractive index in the terrain parabolic equation in inhomogeneous media, a gradient descent algorithm based on optimal control theory is proposed. By establishing an objective-functional optimization model with regularization constraints, an efficient iterative solution scheme is designed. Numerical experiments show that, under typical inhomogeneous-medium conditions, the algorithm accurately reconstructs the spatial distribution of the complex-valued refractive index and exhibits good robustness in the presence of measurement noise.
Keywords: refractive-index inversion, optimal control, gradient descent method, numerical experiments
19. A Joint Power-Bandwidth Allocation Strategy under Deceptive Jamming
Authors: 李辉, 武会斌, 王伟东, 张恺, 侯庆华. 《电子科技》, 2026, No. 2, pp. 19-27.
To counter the radar performance degradation caused by deceptive jamming, this paper proposes a joint power-bandwidth allocation scheme that improves radar detection accuracy and, through the improved detection performance, strengthens the radar's anti-jamming decision-making. The three-dimensional Cramér-Rao lower bound (CRLB) of the deceptive range is taken to represent detection accuracy, and an optimization problem is formulated with the CRLB as the objective function. Given limited resources, the total power and total bandwidth are constrained to fixed budgets. Because the resource-allocation problem is non-convex and nonlinear, a solution combining a cyclic minimization algorithm with projected gradient descent is proposed. Simulations under different radar layouts show that, compared with the unoptimized allocation, the jointly optimized allocation reduces the CRLB by 20%-30%, improving detection accuracy and mitigating the performance degradation caused by deceptive jamming.
Keywords: distributed MIMO radar, deceptive jamming, false-target discrimination, radar resource allocation, CRLB, cyclic minimization algorithm, non-convex optimization, projected gradient descent
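The projection step in projected gradient descent with a total-power budget is the Euclidean projection onto a scaled simplex. A sketch on a toy surrogate objective (minimizing a sum of w_i/p_i, loosely CRLB-flavored; the weights, stepsize, and budget are made up for illustration and are not the paper's model):

```python
def project_budget(p, total):
    """Euclidean projection onto {p_i >= 0, sum p_i = total} (sort-based)."""
    u = sorted(p, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, 1):
        css += ui
        t = (css - total) / i
        if ui - t > 0:
            theta = t
    return [max(0.0, pi - theta) for pi in p]

def pgd_allocate(grad, total, n, step=0.01, iters=300):
    """Projected gradient descent starting from the uniform allocation."""
    p = [total / n] * n
    for _ in range(iters):
        g = grad(p)
        p = project_budget([pi - step * gi for pi, gi in zip(p, g)], total)
    return p

# toy surrogate: minimize w1/p1 + w2/p2 under a unit power budget;
# the optimum allocates power proportionally to sqrt(w_i): (1/3, 2/3)
w = [1.0, 4.0]
grad = lambda p: [-wi / (pi * pi + 1e-12) for wi, pi in zip(w, p)]
alloc = pgd_allocate(grad, 1.0, 2)
```

In the paper's scheme this projection would be applied separately to the power and bandwidth budgets inside the cyclic minimization loop.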
20. ENTROPICAL OPTIMAL TRANSPORT, SCHRODINGER'S SYSTEM AND ALGORITHMS
Author: Liming WU. Acta Mathematica Scientia (SCIE, CSCD), 2021, No. 6, pp. 2183-2197.
In this expository paper we present the optimal transport problem of Monge-Ampère-Kantorovich (MAK) and its approximative entropical regularization. Contrary to the MAK optimal transport problem, the solution of the entropical optimal transport problem is always unique and is characterized by the Schrödinger system. The relationship between the Schrödinger system, the associated Bernstein process, and optimal transport was developed by Léonard [32, 33] (and by Mikami [39] earlier via an h-process). We present Sinkhorn's algorithm for solving the Schrödinger system and recent results on its convergence rate. We study the gradient descent algorithm based on the dual optimization problem and prove its exponential convergence, with a rate that might be independent of the regularization constant. This exposition is motivated by recent applications of optimal transport to domains such as machine learning, image processing, econometrics, and astrophysics.
Keywords: entropical optimal transport, Schrödinger system, Sinkhorn's algorithm, gradient descent
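Sinkhorn's algorithm alternately rescales the rows and columns of the Gibbs kernel K = exp(-C/eps) until both marginals of the transport plan match. A minimal discrete sketch (the marginals, cost matrix, and regularization eps below are illustrative):

```python
import math

def sinkhorn(mu, nu, cost, eps=0.1, iters=500):
    """Sinkhorn iterations for entropically regularized discrete OT.
    Returns the transport plan diag(u) K diag(v)."""
    K = [[math.exp(-c / eps) for c in row] for row in cost]
    n, m = len(mu), len(nu)
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # match row marginals, then column marginals
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# two points each with equal mass; zero cost on the diagonal,
# so nearly all mass stays in place for small eps
mu, nu = [0.5, 0.5], [0.5, 0.5]
cost = [[0.0, 1.0], [1.0, 0.0]]
plan = sinkhorn(mu, nu, cost)
```

The scaling vectors u and v are exactly the discrete counterpart of the Schrödinger system's two potentials; as eps shrinks, the plan approaches the unregularized MAK solution.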