280,369 articles found
Research on Intelligent Ship Route Planning Based on the Adaptive Step Size Informed-RRT^(*) Algorithm
1
Authors: Zhaoqi Liu, Jianhui Cui, Fanbin Meng, Huawei Xie, Yangwen Dan, Bin Li. 《哈尔滨工程大学学报(英文版)》, 2025, No. 4, pp. 829-839 (11 pages)
Advancements in artificial intelligence and big data technologies have led to the gradual emergence of intelligent ships, which are expected to dominate the future of maritime transportation. To support the navigation of intelligent ships, route planning research has produced many algorithms that prioritize economy and safety. This paper conducts an in-depth study of algorithm efficiency for the route planning problem, proposing an intelligent ship route planning algorithm based on the adaptive step size Informed-RRT^(*). The algorithm can quickly plan a short route with automatic obstacle avoidance and is suitable for planning the routes of intelligent ships. Results show that the adaptive step size Informed-RRT^(*) algorithm can shorten the optimal route length by approximately 13.05% while keeping the running time of the planning algorithm under control and avoiding approximately 23.64% of redundant sampling nodes. The improved algorithm effectively circumvents unnecessary calculations and reduces a large amount of redundant sampling data, thus improving the efficiency of route planning. In a complex water environment, the unique adaptive step size mechanism prevents restricted search tree expansion, showing strong search ability and robustness, which is of practical significance for the development of intelligent ships.
Keywords: Informed-RRT^(*); adaptive step size; route planning technology; robustness; automatic obstacle avoidance
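The abstract names two mechanisms, informed (ellipsoidal) sampling once an initial route exists and a step size that adapts to the surroundings, without giving formulas. The 2D sketch below illustrates both; the clearance-based step rule, the circular obstacle model, and the constants `STEP_MIN`/`STEP_MAX` are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

STEP_MIN, STEP_MAX = 0.5, 4.0   # assumed bounds on the tree-extension step (map units)

def nearest_obstacle_distance(p, obstacles):
    """Clearance to the closest obstacle; obstacles are (centre, radius) circles."""
    return min(np.linalg.norm(p - c) - r for c, r in obstacles)

def informed_sample(start, goal, c_best, rng, bound=50.0):
    """Informed-RRT* sampling: once a route of cost c_best exists, sample inside the
    ellipse with foci start/goal and major axis c_best; otherwise sample the whole map."""
    c_min = np.linalg.norm(goal - start)
    if not np.isfinite(c_best):
        return rng.uniform(-bound, bound, size=2)       # assumed square map extent
    centre = (start + goal) / 2.0
    theta = np.arctan2(goal[1] - start[1], goal[0] - start[0])
    C = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])     # rotate x-axis onto start->goal
    r1 = c_best / 2.0
    r2 = np.sqrt(max(c_best ** 2 - c_min ** 2, 1e-9)) / 2.0
    phi, r = rng.uniform(0.0, 2.0 * np.pi), np.sqrt(rng.uniform())
    x_ball = r * np.array([np.cos(phi), np.sin(phi)])   # uniform point in the unit disc
    return C @ (np.array([r1, r2]) * x_ball) + centre

def adaptive_step(x_near, obstacles):
    """Assumed rule: expand boldly in open water, cautiously near obstacles."""
    return float(np.clip(nearest_obstacle_distance(x_near, obstacles), STEP_MIN, STEP_MAX))

def steer(x_near, x_rand, step):
    d = x_rand - x_near
    dist = np.linalg.norm(d)
    return x_rand if dist <= step else x_near + step * d / dist

rng = np.random.default_rng(0)
start, goal = np.array([0.0, 0.0]), np.array([30.0, 20.0])
obstacles = [(np.array([15.0, 10.0]), 3.0)]
x_new = steer(start, informed_sample(start, goal, np.inf, rng), adaptive_step(start, obstacles))
```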
Trajectory planning for delivery UAVs using an improved APF-Informed-RRT^(*) algorithm (Cited by: 1)
2
Authors: 刘玉梦, 任彦, 王静宇, 赵利云, 王琦, 单俊茹. 《中国测试》 (PKU Core), 2025, No. 1, pp. 173-180 (8 pages)
To accelerate last-mile logistics delivery, the trajectory planning problem of delivery UAVs is studied. To address the blind sampling and unsmooth paths of the traditional rapidly-exploring random tree (RRT) algorithm in trajectory planning, the artificial potential field (APF) method is fused with the Informed-RRT^(*) algorithm, and an improved APF-Informed-RRT^(*) algorithm with an adaptive step-size growth strategy is proposed. First, when selecting a new node, an adaptive step-size growth strategy that accounts for the influence of obstacles and the goal point is proposed to overcome blind sampling; second, cubic B-splines are used to smooth the path at turning points; finally, simulation experiments are conducted in two environments with the RRT^(*), Informed-RRT^(*), and improved APF-Informed-RRT^(*) algorithms. The results show that, compared with RRT^(*) and Informed-RRT^(*), the improved APF-Informed-RRT^(*) algorithm improves running time, number of iterations, and path smoothness.
Keywords: last-mile logistics delivery; trajectory planning; artificial potential field method; Informed-RRT^(*) algorithm
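As a rough illustration of how an artificial potential field can bias RRT growth and stretch the step size, the sketch below combines a classic attractive/repulsive force with the sampled direction. The gains, the cutoff distance `D0`, and the agreement-based step rule are assumptions; the paper's exact adaptive growth strategy and B-spline smoothing step are not reproduced.

```python
import numpy as np

K_ATT, K_REP, D0 = 1.0, 50.0, 5.0   # assumed gains and repulsion cutoff distance

def apf_force(p, goal, obstacles):
    """Classic APF: attraction toward the goal plus repulsion from circular obstacles
    (centre, radius) that lie within the cutoff distance D0."""
    f = K_ATT * (goal - p)
    for c, r in obstacles:
        d = np.linalg.norm(p - c) - r            # clearance to the obstacle surface
        if 0.0 < d < D0:
            f += K_REP * (1.0 / d - 1.0 / D0) / d ** 2 * (p - c) / np.linalg.norm(p - c)
    return f

def guided_new_node(x_near, x_rand, goal, obstacles, base_step=1.0):
    """Assumed fusion rule: average the random-sample direction with the APF direction,
    and let the step grow (up to 3x) when the two directions agree, i.e. in open space."""
    d_rand = (x_rand - x_near) / (np.linalg.norm(x_rand - x_near) + 1e-9)
    f = apf_force(x_near, goal, obstacles)
    d_field = f / (np.linalg.norm(f) + 1e-9)
    direction = d_rand + d_field
    direction /= np.linalg.norm(direction) + 1e-9
    step = base_step * (1.0 + 2.0 * max(0.0, float(d_rand @ d_field)))
    return x_near + step * direction

x_new = guided_new_node(np.array([0.0, 0.0]), np.array([5.0, 5.0]),
                        goal=np.array([20.0, 20.0]),
                        obstacles=[(np.array([3.0, 3.0]), 1.0)])
```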
Adaptive path planning for manipulators fusing the artificial potential field and Informed-RRT^(*) algorithms (Cited by: 3)
3
Authors: 贾浩铎, 房立金, 王怀震. 《计算机集成制造系统》 (PKU Core), 2025, No. 4, pp. 1179-1189 (11 pages)
To address the long planning time, low iteration efficiency, and unsuitability for dynamic scenes of the Informed-RRT^(*) algorithm, an adaptive path planning algorithm for manipulators that fuses the artificial potential field with Informed-RRT^(*) is proposed. For the path growth direction, a probability-adaptive goal-bias strategy is proposed: a decision region is constructed to generate the bias probability and, combined with artificial potential field constraints, the randomness of direction selection is restricted. For path extension, a global adaptive step-size method adjusts the step size according to the position of the sampling point in the potential field, improving exploration ability and shortening planning time. For path iteration, a position function guides the generation of iteration points, enabling efficient path optimization. When the scene changes, the old tree information is retained and the artificial potential field method is used for path replanning; reselecting the goal point allows the algorithm to escape local optima, enhancing its applicability in dynamic scenes. Simulation results show that, compared with Informed-RRT^(*), the proposed algorithm increases path planning speed by 51.59%, reduces the optimal path length by 8.03%, and adapts better to environmental changes.
Keywords: Informed-RRT^(*) algorithm; artificial potential field method; path planning; dynamic scenes
An improved Informed-RRT^(*) path planning algorithm based on the Sea-Horse Optimizer (Cited by: 1)
4
Authors: 严贵僧, 杨洁. 《机械传动》 (PKU Core), 2025, No. 2, pp. 93-100 (8 pages)
[Objective] To overcome the random sampling, inefficient search, and difficulty in providing optimal paths that the traditional Informed-RRT^(*) algorithm faces in complex environments, an improved Informed-RRT^(*) path planning algorithm based on the Sea-Horse Optimizer (SHO) is proposed. [Method] The algorithm combines the strengths of Informed-RRT^(*) and SHO and introduces a fitness function to evaluate the suitability of sampling nodes, thereby strengthening the guidance of sampling toward the goal; in addition, an adaptive step size and random perturbations are used to adapt to obstacles in the environment, and the best individual is selected to guide the expansion direction of the random tree. [Result] Comparisons across multiple simulations and prototype experiments show that the improved Informed-RRT^(*) algorithm has faster convergence, higher search efficiency, and better path planning performance, providing an efficient solution for path planning in complex environments.
Keywords: SHO algorithm; Informed-RRT^(*) algorithm; path planning; sampling guidance; autonomous obstacle avoidance
Research on path planning for open-pit haulage mining trucks based on an improved APF-Informed-RRT^(*) algorithm (Cited by: 1)
5
Authors: 付有震, 廖道争, 文斌. 《现代电子技术》 (PKU Core), 2025, No. 19, pp. 143-149 (7 pages)
Existing path planning methods for unmanned mining trucks produce routes whose curvature varies over a wide range and changes frequently, and they seldom consider the safety of the haulage route. To address these problems, the traditional artificial potential field method is improved and then used as a heuristic to further guide the random tree generation of the Informed-RRT^(*) algorithm, yielding a path planning method for unmanned mining trucks based on an improved APF-Informed-RRT^(*) algorithm. First, a safety constraint is added to the hyperellipsoidal random sampling region of Informed-RRT^(*). Then, a virtual target region is introduced to resolve the local-optimum problem of the artificial potential field method, and a "black hole function" is used to eliminate the goal-unreachable problem; on this basis, the improved APF is introduced into Informed-RRT^(*) as a heuristic, and an increasing sampling rate is used to reduce the generation of invalid tree branches. Finally, an adaptive step size is used to optimize the overall path. Simulation results show that, compared with the APF-RRT^(*) and Informed-RRT^(*) algorithms, the planned route's yaw-angle amplitude and curvature variation are markedly reduced while the safety distance is maintained. The feasibility of mine haulage was verified in an environment built with real miniature vehicles; compared with the APF-Informed-RRT^(*) algorithm, the improved method reduces the number of generated tree nodes by about 47%, shortens planning time by 26%, and never becomes trapped in a local optimum, validating the effectiveness of the proposed method.
Keywords: unmanned mining truck; path planning; Informed-RRT^(*); artificial potential field; adaptive step size; hyperellipsoid constraint; local optimum; virtual target region
Research on a path planning algorithm based on improved Informed-RRT^(*)
6
Authors: 孙馨宇, 徐家川, 焦学健, 周洋, 徐晗. 《电子测量技术》 (PKU Core), 2025, No. 6, pp. 73-82 (10 pages)
To address the high randomness, many invalid nodes, and low convergence efficiency of the Informed-RRT^(*) algorithm in path planning, an improved Informed-RRT^(*) algorithm is proposed. The algorithm improves node utilization through global sampling optimization and an adaptive step size; it finds an initial path with probability-biased bidirectional search and parent reselection, providing a good starting value for subsequent iterative optimization; a greedy strategy is added during the elliptical iteration to reduce useless nodes; finally, path backtracking optimization further reduces useless nodes and improves path straightness. Using obstacle complexity and map size as two experimental variables, the improved algorithm and Informed-RRT^(*) are compared in four scenarios; over 20 runs, the improved algorithm reduces the number of path nodes by 28.6%-64.3% and the path length by 0.3%-2.7%. The results show that, compared with Informed-RRT^(*), the improved algorithm raises node utilization and, for the same number of iterations, yields shorter paths with significantly fewer path nodes.
Keywords: path planning; grid map; improved Informed-RRT^(*)
Mobile robot path planning with an improved Informed-RRT^(*) algorithm (Cited by: 3)
7
Authors: 葛超, 张鑫源, 王红, 伦志新. 《电光与控制》 (PKU Core), 2025, No. 1, pp. 48-53 (6 pages)
To address the slow formation of the initial path, the high failure rate, and the poor path quality of the Informed-RRT^(*) algorithm, a point selection strategy based on the artificial potential field method is proposed. First, high-quality sampling points are screened, and a bidirectional direct-connection greedy strategy and a dynamic step-size strategy are introduced to obtain an initial path quickly and enter the traversal optimization stage as early as possible. Second, a new sampling strategy and evaluation function ensure a better planned path. Finally, the path is post-processed so that it is more suitable for mobile robot travel. Simulation results show that the improved algorithm outperforms Informed-RRT^(*): its success rate is 100% in different environments, and under a limited number of samples its convergence speed and path quality are both better than those of the original algorithm.
Keywords: mobile robot; path planning; artificial potential field method; dynamic step size; path optimization; Informed-RRT^(*)
Method for Estimating the State of Health of Lithium-ion Batteries Based on Differential Thermal Voltammetry and Sparrow Search Algorithm-Elman Neural Network (Cited by: 1)
8
Authors: Yu Zhang, Daoyu Zhang, Tiezhou Wu. 《Energy Engineering》 (EI), 2025, No. 1, pp. 203-220 (18 pages)
Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds using pseudo-random numbers, leading to unstable model performance. To address these issues, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. Firstly, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman neural network model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested using the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
Keywords: lithium-ion battery; state of health; differential thermal voltammetry; Sparrow Search Algorithm
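The core idea, using a population-based search to pick the Elman network's initial weights and thresholds by minimizing a validation error, can be sketched as below. The sparrow-search update rules here are heavily simplified stand-ins, the Elman cell is a bare numpy forward pass, and the toy series replaces the Oxford aging data; none of this reproduces the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the SOH features: one health feature x, capacity fade y.
x = np.linspace(0.0, 1.0, 200)
y = 1.0 - 0.25 * x + 0.02 * np.sin(12.0 * x)

H = 6                                # hidden units of the Elman-style cell
DIM = H + H * H + H + H + 1          # flat parameter vector length

def unpack(theta):
    """Split the flat vector into input, recurrent and output weights plus thresholds."""
    i = 0
    W_in = theta[i:i + H]; i += H
    W_rec = theta[i:i + H * H].reshape(H, H); i += H * H
    b_h = theta[i:i + H]; i += H
    W_out = theta[i:i + H]; i += H
    return W_in, W_rec, b_h, W_out, theta[i]

def fitness(theta):
    """Validation-style fitness: one-step RMSE of the Elman forward pass."""
    W_in, W_rec, b_h, W_out, b_o = unpack(theta)
    h, err = np.zeros(H), 0.0
    for xi, yi in zip(x, y):
        h = np.tanh(W_in * xi + W_rec @ h + b_h)     # recurrent hidden state
        err += (W_out @ h + b_o - yi) ** 2
    return np.sqrt(err / len(x))

def sparrow_search(f, dim, pop=20, iters=40):
    """Very simplified SSA loop: the best fifth ('producers') contract their positions,
    the rest ('scroungers') move toward the current best individual."""
    X = rng.uniform(-1.0, 1.0, (pop, dim))
    for t in range(iters):
        scores = np.array([f(ind) for ind in X])
        X = X[np.argsort(scores)]
        n_prod = pop // 5
        X[:n_prod] *= np.exp(-np.arange(1, n_prod + 1)[:, None] / (rng.random() * iters + 1e-9))
        X[n_prod:] = X[0] + 0.1 * rng.standard_normal((pop - n_prod, dim)) * np.abs(X[n_prod:] - X[0])
        X = np.clip(X, -1.0, 1.0)
    scores = np.array([f(ind) for ind in X])
    return X[np.argmin(scores)], float(scores.min())

best_theta, best_rmse = sparrow_search(fitness, DIM)
# In the paper, the SSA-selected weights/thresholds initialise an Elman network that is
# then trained normally; here the search itself already fits the toy series.
```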
Robustness Optimization Algorithm with Multi-Granularity Integration for Scale-Free Networks Against Malicious Attacks (Cited by: 1)
9
Authors: ZHANG Yiheng, LI Jinhai. 《昆明理工大学学报(自然科学版)》 (PKU Core), 2025, No. 1, pp. 54-71 (18 pages)
Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at various granular layers in this structure, and finally realizes the information exchange between different granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%. This improvement is approximately 17.2% higher than the optimization effects achieved by eight currently existing complex network robustness optimization algorithms.
Keywords: complex network model; multi-granularity; scale-free networks; robustness; algorithm integration
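The MGIA itself is not described in enough detail to reproduce, but its constraints (fixed node degrees, connectivity, no multi-edges) match the classic degree-preserving edge-swap baseline sketched below with networkx; the Schneider-style robustness measure R under a highest-degree attack is a common choice and an assumption here, not the paper's metric.

```python
import random
import networkx as nx

def robustness_R(G):
    """Schneider-style robustness: mean largest-component fraction while removing
    nodes in decreasing-degree order (a standard malicious-attack model)."""
    H, n, total = G.copy(), G.number_of_nodes(), 0.0
    for _ in range(n - 1):
        v = max(H.degree, key=lambda kv: kv[1])[0]
        H.remove_node(v)
        total += max(len(c) for c in nx.connected_components(H)) / n if H else 0.0
    return total / n

def greedy_degree_preserving_rewire(G, trials=2000, seed=0):
    """Baseline (not the paper's MGIA): swap edge pairs (a-b, c-d) -> (a-d, c-b),
    which preserves every node degree, and keep a swap only if it raises R
    while the graph stays connected and simple."""
    rng = random.Random(seed)
    best = robustness_R(G)
    for _ in range(trials):
        (a, b), (c, d) = rng.sample(list(G.edges()), 2)
        if len({a, b, c, d}) < 4 or G.has_edge(a, d) or G.has_edge(c, b):
            continue
        G.remove_edges_from([(a, b), (c, d)]); G.add_edges_from([(a, d), (c, b)])
        if nx.is_connected(G) and (r := robustness_R(G)) > best:
            best = r                                  # accept the improving swap
        else:                                         # revert the swap
            G.remove_edges_from([(a, d), (c, b)]); G.add_edges_from([(a, b), (c, d)])
    return G, best

# Example: optimise a small scale-free (Barabasi-Albert) network.
G0 = nx.barabasi_albert_graph(60, 2, seed=1)
G1, R1 = greedy_degree_preserving_rewire(G0.copy(), trials=300)
```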
Short-Term Wind Power Forecast Based on STL-IAOA-iTransformer Algorithm: A Case Study in Northwest China (Cited by: 2)
10
Authors: Zhaowei Yang, Bo Yang, Wenqi Liu, Miwei Li, Jiarong Wang, Lin Jiang, Yiyan Sang, Zhenning Pan. 《Energy Engineering》, 2025, No. 2, pp. 405-430 (26 pages)
Accurate short-term wind power forecasting plays a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid short-term wind power forecast approach named STL-IAOA-iTransformer, which is based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL is used to decompose the original data into components with less redundant information. The extracted components as well as the weather data are then input into iTransformer for short-term wind power forecasting. The final predicted short-term wind power curve is obtained by combining the predicted components. To improve the model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated using real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate the superiority of the proposed approach under different wind characteristics, real power generation data from Southwest China are utilized for experiments. Comparative results with six other state-of-the-art prediction models show that the proposed model fits the true generation series well and achieves high prediction accuracy.
Keywords: short-term wind power forecast; improved arithmetic optimization algorithm; iTransformer algorithm; SimuNPS
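The decomposition half of the pipeline maps directly onto statsmodels' STL; the sketch below shows the decompose-forecast-recombine pattern with a naive placeholder forecaster where the paper's IAOA-tuned iTransformer (and the weather inputs) would go. The synthetic hourly series and the 24-hour period are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic hourly wind-power series standing in for the real plant data.
idx = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
power = pd.Series(
    50 + 10 * np.sin(2 * np.pi * idx.hour / 24)
    + np.random.default_rng(0).normal(0, 3, len(idx)),
    index=idx,
)

# Step 1: STL splits the series into trend, daily-seasonal and residual components.
res = STL(power, period=24, robust=True).fit()
components = {"trend": res.trend, "seasonal": res.seasonal, "resid": res.resid}

# Step 2 (placeholder): forecast each component separately, then recombine.
# The paper feeds the components plus weather data into an IAOA-tuned iTransformer;
# here a naive "repeat the last day" forecaster marks where that model would go.
horizon = 24
forecasts = {name: comp.iloc[-horizon:].to_numpy() for name, comp in components.items()}

# Step 3: the final wind-power forecast is the sum of the component forecasts.
final_forecast = sum(forecasts.values())
```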
A LODBO algorithm for multi-UAV search and rescue path planning in disaster areas (Cited by: 1)
11
Authors: Liman Yang, Xiangyu Zhang, Zhiping Li, Lei Li, Yan Shi. 《Chinese Journal of Aeronautics》, 2025, No. 2, pp. 200-213 (14 pages)
In disaster relief operations, multiple UAVs can be used to search for trapped people. In recent years, many researchers have proposed machine learning-based algorithms, sampling-based algorithms, and heuristic algorithms to solve the problem of multi-UAV path planning. Among these, the Dung Beetle Optimization (DBO) algorithm has been widely applied owing to its diverse search patterns. However, the update strategies for the rolling and thieving dung beetles of the DBO algorithm are overly simplistic, potentially leading to an inability to fully explore the search space and a tendency to converge to local optima, so discovery of the optimal path is not guaranteed. To address these issues, we propose an improved DBO algorithm guided by the Landmark Operator (LODBO). Specifically, we first use tent mapping to update the population strategy, which enables the algorithm to generate initial solutions with enhanced diversity within the search space. Second, we expand the search range of the rolling dung beetle by using the landmark factor. Finally, by using an adaptive factor that changes with the number of iterations, we improve the global search ability of the thieving dung beetle, making it more likely to escape from local optima. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted, and the results show that the LODBO algorithm obtains the optimal path in the shortest time compared with the Genetic Algorithm (GA), the Gray Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the original DBO algorithm on the disaster search and rescue task set.
Keywords: unmanned aerial vehicle; path planning; metaheuristic algorithm; DBO algorithm; NP-hard problems
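Of the three improvements, only the tent-map population initialization is concrete enough to sketch without guessing; the landmark and adaptive factors are omitted. The map parameter `mu = 2` and the waypoint-vector encoding are assumptions.

```python
import numpy as np

def tent_map_population(pop_size, dim, lower, upper, mu=2.0, seed=0):
    """Tent-map chaotic initialisation: iterate x_{k+1} = mu*x_k if x_k < 0.5 else
    mu*(1 - x_k), then scale the chaotic values into [lower, upper] so that the
    initial population covers the search space more evenly than uniform sampling."""
    rng = np.random.default_rng(seed)
    X = np.empty((pop_size, dim))
    x = rng.random(dim)                      # one chaotic trajectory per dimension
    for i in range(pop_size):
        x = np.where(x < 0.5, mu * x, mu * (1.0 - x))
        x = np.clip(x, 1e-9, 1.0 - 1e-9)     # keep the orbit away from fixed points
        X[i] = lower + x * (upper - lower)
    return X

# Example: 30 candidate UAV waypoint vectors in a 10-dimensional search space.
population = tent_map_population(30, 10, lower=np.zeros(10), upper=np.full(10, 100.0))
```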
Research on Euclidean Algorithm and Reflection on Its Teaching
12
Authors: ZHANG Shaohua. 《应用数学》 (PKU Core), 2025, No. 1, pp. 308-310 (3 pages)
In this paper, we prove that Euclid's algorithm, Bezout's equation, and the Division algorithm are equivalent to each other. Our result shows that Euclid had preliminarily established the theory of divisibility and the greatest common divisor. We further provide several suggestions for teaching.
Keywords: Euclid's algorithm; Division algorithm; Bezout's equation
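The equivalence the paper discusses is easiest to see in the extended Euclidean algorithm, where each division step of Euclid's algorithm is back-substituted into Bezout's equation:

```python
def extended_gcd(a: int, b: int):
    """Euclid's algorithm extended to return (g, x, y) with a*x + b*y = g = gcd(a, b),
    i.e. the Bezout coefficients whose existence the paper relates to the algorithm."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)     # gcd(a, b) = gcd(b, a mod b)  (Division algorithm)
    return g, y, x - (a // b) * y        # back-substitute to rebuild Bezout's equation

g, x, y = extended_gcd(240, 46)          # g = 2, and 240*x + 46*y == 2
assert g == 240 * x + 46 * y == 2
```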
DDoS Attack Autonomous Detection Model Based on Multi-Strategy Integrated Zebra Optimization Algorithm
13
Authors: Chunhui Li, Xiaoying Wang, Qingjie Zhang, Jiaye Liang, Aijing Zhang. 《Computers, Materials & Continua》 (SCIE, EI), 2025, No. 1, pp. 645-674 (30 pages)
Previous studies have shown that deep learning is very effective in detecting known attacks. However, when facing unknown attacks, models such as Deep Neural Networks (DNN) combined with Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN) combined with LSTM, and so on are built by simple stacking, which suffers from feature loss, low efficiency, and low accuracy. Therefore, this paper proposes an autonomous detection model for Distributed Denial of Service attacks, Multi-Scale Convolutional Neural Network-Bidirectional Gated Recurrent Units-Single Headed Attention (MSCNN-BiGRU-SHA), which is based on a Multi-strategy Integrated Zebra Optimization Algorithm (MI-ZOA). The model undergoes training and testing with the CICDDoS2019 dataset, and its performance is evaluated on a new GINKS2023 dataset. The hyperparameters for Conv_filter and GRU_unit are optimized using the MI-ZOA. The experimental results show that the test accuracy of the MSCNN-BiGRU-SHA model based on the MI-ZOA proposed in this paper is as high as 0.9971 on the CICDDoS2019 dataset. The evaluation accuracy on the new GINKS2023 dataset created in this paper is 0.9386. Compared to the MSCNN-BiGRU-SHA model based on the original Zebra Optimization Algorithm (ZOA), detection accuracy on the GINKS2023 dataset improves by 5.81%, precision increases by 1.35%, recall improves by 9%, and the F1 score increases by 5.55%. Compared to the MSCNN-BiGRU-SHA models developed using Grid Search, Random Search, and Bayesian Optimization, the MSCNN-BiGRU-SHA model optimized with the MI-ZOA exhibits better performance in terms of accuracy, precision, recall, and F1 score.
Keywords: Distributed Denial of Service attack; intrusion detection; deep learning; Zebra Optimization Algorithm; multi-strategy integrated Zebra Optimization Algorithm
Bearing capacity prediction of open caissons in two-layered clays using five tree-based machine learning algorithms (Cited by: 1)
14
Authors: Rungroad Suppakul, Kongtawan Sangjinda, Wittaya Jitchaijaroen, Natakorn Phuksuksakul, Suraparb Keawsawasvong, Peem Nuaklong. 《Intelligent Geoengineering》, 2025, No. 2, pp. 55-65 (11 pages)
Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability in diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five ensemble machine learning (ML) algorithms, random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, varying in geometrical and soil parameters. The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can reduce computational costs while improving reliability in the early stages of foundation design.
Keywords: two-layered clay; open caisson; tree-based algorithms; FELA; machine learning
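A minimal sketch of the paper's workflow with scikit-learn: fit several tree ensembles on a 70/30 split and report R², MAE, and RMSE. The synthetic features and target stand in for the 1188 FELA cases, and XGBoost/CatBoost are only indicated by a comment to keep the example self-contained.

```python
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for the FELA dataset: features such as embedment ratio and
# the strength ratio of the two clay layers; target is the capacity factor Nc.
X = rng.uniform(0, 1, size=(1188, 4))
y = 6 + 3 * X[:, 0] + 2 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 1188)

# Same 70/30 split as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=300, random_state=0),
    "GBM": GradientBoostingRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
    # XGBoost and CatBoost would slot in here via their sklearn-compatible wrappers.
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name:8s}  R2={r2_score(y_te, pred):.3f}  "
          f"MAE={mean_absolute_error(y_te, pred):.3f}  RMSE={rmse:.3f}")
```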
Path Planning for Thermal Power Plant Fan Inspection Robot Based on Improved A^(*) Algorithm (Cited by: 1)
15
Authors: Wei Zhang, Tingfeng Zhang. 《Journal of Electronic Research and Application》, 2025, No. 1, pp. 233-239 (7 pages)
To improve the efficiency and accuracy of path planning for fan inspection tasks in thermal power plants, this paper proposes an intelligent inspection robot path planning scheme based on an improved A^(*) algorithm. The inspection robot utilizes multiple sensors to monitor key parameters of the fans, such as vibration, noise, and bearing temperature, and uploads the data to the monitoring center. The robot's inspection path employs the improved A^(*) algorithm, incorporating obstacle penalty terms, path reconstruction, and smoothing optimization techniques, thereby achieving optimal path planning for the inspection robot in complex environments. Simulation results demonstrate that the improved A^(*) algorithm significantly outperforms the traditional A^(*) algorithm in terms of total path distance, smoothness, and detour rate, effectively improving the execution efficiency of inspection tasks.
Keywords: power plant fans; inspection robot; path planning; improved A^(*) algorithm
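A sketch of A* on an occupancy grid with an obstacle-proximity penalty added to the step cost, which is the flavor of "obstacle penalty term" the abstract mentions; the penalty weight, 8-neighborhood count, and 4-connected motion model are assumptions, and the paper's path reconstruction and smoothing steps are not reproduced.

```python
import heapq

def improved_a_star(grid, start, goal, penalty_weight=2.0):
    """A* on an occupancy grid (0 = free, 1 = obstacle) whose step cost adds a
    penalty for cells adjacent to obstacles, so the route keeps a safety margin."""
    rows, cols = len(grid), len(grid[0])

    def h(p):                                   # Manhattan heuristic (admissible on a 4-grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    def obstacle_penalty(p):                    # count obstacle cells in the 8-neighbourhood
        r, c = p
        near = sum(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if 0 <= r + dr < rows and 0 <= c + dc < cols)
        return penalty_weight * near

    open_set = [(h(start), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                          # reconstruct the path by walking parents back
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            new_g = g + 1.0 + obstacle_penalty(nxt)
            if new_g < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = new_g
                heapq.heappush(open_set, (new_g + h(nxt), new_g, nxt, cur))
    return None

grid = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
print(improved_a_star(grid, (0, 0), (2, 3)))
```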
An Algorithm for Cloud-based Web Service Combination Optimization Through Plant Growth Simulation
16
Authors: Li Qiang, Qin Huawei, Qiao Bingqin, Wu Ruifang. 《系统仿真学报》 (PKU Core), 2025, No. 2, pp. 462-473 (12 pages)
To improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. The model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm is compared across several plant types, and the best plant model is selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than PSO, 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Keywords: cloud-based service; scheduling algorithm; resource constraint; load optimization; cloud computing; plant growth simulation algorithm
Improved algorithm of multi-mainlobe interference suppression under uncorrelated and coherent conditions (Cited by: 1)
17
Authors: CAI Miaohong, CHENG Qiang, MENG Jinli, ZHAO Dehua. 《Journal of Southeast University(English Edition)》, 2025, No. 1, pp. 84-90 (7 pages)
A new method based on the iterative adaptive algorithm (IAA) and blocking matrix preprocessing (BMP) is proposed for the suppression of multi-mainlobe interference. The algorithm precisely estimates the spatial spectrum and the directions of arrival (DOA) of interferences to overcome the drawbacks associated with conventional adaptive beamforming (ABF) methods. The mainlobe interferences are identified by calculating the correlation coefficients between direction steering vectors (SVs) and are rejected by the BMP pretreatment. Then, IAA is employed to reconstruct a sidelobe interference-plus-noise covariance matrix for improved ABF and residual interference suppression. Simulation results demonstrate the superiority of the proposed method over conventional methods based on BMP and eigen-projection matrix preprocessing (EMP) under both uncorrelated and coherent circumstances.
Keywords: mainlobe interference suppression; adaptive beamforming; spatial spectral estimation; iterative adaptive algorithm; blocking matrix preprocessing
Intelligent sequential multi-impulse collision avoidance method for non-cooperative spacecraft based on an improved search tree algorithm (Cited by: 1)
18
Authors: Xuyang CAO, Xin NING, Zheng WANG, Suyi LIU, Fei CHENG, Wenlong LI, Xiaobin LIAN. 《Chinese Journal of Aeronautics》, 2025, No. 4, pp. 378-393 (16 pages)
The problem of collision avoidance for non-cooperative targets has received significant attention from researchers in recent years. Non-cooperative targets exhibit uncertain states and unpredictable behaviors, making collision avoidance significantly more challenging than that for space debris. Much existing research focuses on the continuous thrust model, whereas the impulsive maneuver model is more appropriate for long-duration and long-distance avoidance missions. Additionally, it is important to minimize the impact on the original mission while avoiding non-cooperative targets. On the other hand, existing avoidance algorithms are computationally complex and time-consuming, especially given the limited computing capability of the on-board computer, posing challenges for practical engineering applications. To conquer these difficulties, this paper makes the following key contributions: (A) a turn-based (sequential decision-making) limited-area impulsive collision avoidance model considering the time delay of precision orbit determination is established for the first time; (B) a novel Selection Probability Learning Adaptive Search-depth Search Tree (SPL-ASST) algorithm is proposed for non-cooperative target avoidance, which improves decision-making efficiency by introducing an adaptive-search-depth mechanism and a neural network into the traditional Monte Carlo Tree Search (MCTS). Numerical simulations confirm the effectiveness and efficiency of the proposed method.
Keywords: non-cooperative target; collision avoidance; limited motion area; impulsive maneuver model; search tree algorithm; neural networks
A Class of Parallel Algorithms for Solving Low-rank Tensor Completion
19
Authors: LIU Tingyan, WEN Ruiping. 《应用数学》 (PKU Core), 2025, No. 4, pp. 1134-1144 (11 pages)
In this paper, we establish a class of parallel algorithms for solving the low-rank tensor completion problem. The main idea is that N singular value decompositions are implemented on N different processors, one for each slice matrix under the unfold operator, and the fold operator is then used to form the next iteration tensor, so that the computing time can be decreased. In theory, we analyze the global convergence of the algorithm. In the numerical experiments, simulations on synthetic data and real image inpainting are carried out. The results show that the parallel algorithm outperforms its original algorithm in CPU time at the same precision.
Keywords: tensor completion; low-rank; convergence; parallel algorithm
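The slice-wise structure described in the abstract (unfold, per-mode SVD, fold, combine) resembles HaLRTC-style singular value thresholding; the sketch below dispatches the per-mode SVDs to a thread pool as a stand-in for the paper's N processors. The thresholding parameter, iteration count, and averaging step are assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibres become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    """Singular value soft-thresholding of one unfolded slice matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def parallel_completion(T_obs, mask, tau=1.0, iters=100):
    """HaLRTC-style iteration: threshold every mode unfolding (the N SVDs run in a
    thread pool, standing in for the paper's N processors), fold the results back,
    average them, and re-impose the observed entries."""
    X = T_obs.copy()
    modes = range(T_obs.ndim)
    with ThreadPoolExecutor(max_workers=T_obs.ndim) as pool:
        for _ in range(iters):
            folded = list(pool.map(
                lambda m: fold(svt(unfold(X, m), tau), m, X.shape), modes))
            X = sum(folded) / T_obs.ndim
            X[mask] = T_obs[mask]          # keep the known entries fixed
    return X

# Tiny demo: recover a low-rank 3-way tensor from 60% of its entries.
rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(10, 2)), rng.normal(size=(12, 2)), rng.normal(size=(8, 2))
T = np.einsum("ir,jr,kr->ijk", A, B, C)
mask = rng.random(T.shape) < 0.6
X_hat = parallel_completion(np.where(mask, T, 0.0), mask, tau=0.5, iters=200)
```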
An Iterated Greedy Algorithm with Memory and Learning Mechanisms for the Distributed Permutation Flow Shop Scheduling Problem
20
Authors: Binhui Wang, Hongfeng Wang. 《Computers, Materials & Continua》 (SCIE, EI), 2025, No. 1, pp. 371-388 (18 pages)
The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is the lack of utilization of historical information, which could lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP targeting the minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability curve-based acceptance criterion is proposed by combining a cube root function with custom rules. At last, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments are utilized to verify the effectiveness of the memory mechanism, and the results show that this mechanism is capable of improving the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we have discovered that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well-suited for real-world distributed flow shop scheduling.
Keywords: distributed permutation flow shop scheduling; makespan; iterated greedy algorithm; memory mechanism; cooperative reinforcement learning
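MLIGA's memory and reinforcement-learning components are not reproducible from the abstract, but the iterated greedy skeleton it builds on (destruction, greedy NEH-style reconstruction, acceptance) is standard and sketched below for a single-factory permutation flow shop; the destruction size d and the acceptance rule are assumptions.

```python
import random

def makespan(seq, p):
    """Completion time of the last job on the last machine for permutation seq;
    p[j][m] is the processing time of job j on machine m."""
    n_mach = len(p[0])
    c = [0.0] * n_mach
    for j in seq:
        c[0] += p[j][0]
        for m in range(1, n_mach):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]

def best_insertion(seq, job, p):
    """NEH-style step: insert `job` at the position giving the smallest makespan."""
    best_seq, best_val = None, float("inf")
    for i in range(len(seq) + 1):
        cand = seq[:i] + [job] + seq[i:]
        val = makespan(cand, p)
        if val < best_val:
            best_seq, best_val = cand, val
    return best_seq, best_val

def iterated_greedy(p, d=4, iters=200, seed=0):
    """Plain IG skeleton (destruction + greedy reconstruction + acceptance).
    MLIGA additionally picks the restart solution and parameters such as d with its
    memory and reinforcement-learning mechanisms, which are not reproduced here."""
    rng = random.Random(seed)
    cur = list(range(len(p)))
    cur_val = makespan(cur, p)
    best, best_val = cur[:], cur_val
    for _ in range(iters):
        removed = rng.sample(cur, d)                 # destruction: pull out d jobs
        partial = [j for j in cur if j not in removed]
        for job in removed:                          # reconstruction: greedy reinsertion
            partial, _ = best_insertion(partial, job, p)
        new_val = makespan(partial, p)
        if new_val <= cur_val:                       # simple acceptance criterion
            cur, cur_val = partial, new_val
            if new_val < best_val:
                best, best_val = partial[:], new_val
    return best, best_val

demo_rng = random.Random(1)
p_times = [[demo_rng.randint(1, 20) for _ in range(3)] for _ in range(10)]
print(iterated_greedy(p_times))
```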