Journal Articles
280,394 articles found
1. Time-difference-based multi-output tri-training heterogeneous soft sensor modeling (cited by 1)
Authors: 王大芬, 唐莉丽, 张鑫焱, 聂春雨, 李明珠, 吴菁. 《化工学报》, 北大核心, 2025, No. 3, pp. 1143-1155 (13 pages).
Soft sensor technology provides an effective way to predict important and hard-to-measure variables in industrial processes. However, process complexity and the high cost of data acquisition lead to an imbalance between labeled and unlabeled data, which makes building a high-performance soft sensor model challenging. To address this problem, a time-difference-based multi-output tri-training heterogeneous soft sensor method is proposed. A new tri-training framework is constructed in which three multi-output models, multi-output Gaussian process regression (MGPR), the multi-output relevance vector machine (MRVM), and the multi-output least squares support vector machine (MLSSVM), serve as the baseline supervised regressors, trained and iterated on the labeled data. Time difference (TD) is introduced to improve the model's dynamic behavior, and Kalman filtering (KF) is used to optimize the model parameters and improve prediction performance. The model is validated on the benchmark simulation model 1 (BSM1) wastewater treatment platform and on a real wastewater treatment plant. The results show that, compared with conventional soft sensor modeling methods, the model significantly improves the adaptivity and prediction performance of soft sensors under imbalanced data distributions.
Keywords: tri-training; soft sensor; time difference; co-training; ensemble; prediction; process control
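The time-difference (TD) idea in this entry is easy to sketch: instead of mapping input levels to output levels, the regressor is trained on first differences, and the level prediction is recovered by adding the predicted increment to the last known output. A minimal illustration with made-up data (numpy only; the MGPR/MRVM/MLSSVM regressors and the Kalman filtering step from the paper are out of scope here):

```python
import numpy as np

def td_transform(X, y):
    """Time-difference (TD) transform: model increments, not levels."""
    return np.diff(X, axis=0), np.diff(y)    # x_t - x_{t-1}, y_t - y_{t-1}

def td_reconstruct(y_prev, dy_pred):
    """Recover the level prediction from a predicted increment."""
    return y_prev + dy_pred

# toy history: 3 time steps, 2 process variables, 1 quality variable
X = np.array([[1.0, 2.0], [1.5, 2.5], [2.5, 2.0]])
y = np.array([10.0, 11.0, 13.0])
dX, dy = td_transform(X, y)   # a regressor would be fit on (dX, dy)
```

A regressor fit on `(dX, dy)` predicts the next increment; `td_reconstruct` then turns it back into a level forecast, which is what gives TD models their improved dynamic behavior.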
2. Progress in applying improved Tri-training algorithms to computer-aided diagnosis and other medical fields
Authors: 乔桢, 戚红, 罗曦. 《天津科技》, 2025, No. 9, pp. 26-28, 32 (4 pages).
Tri-training is a semi-supervised learning algorithm within machine learning. Since it was proposed, it has been analyzed in increasing depth, and its improved variants have grown steadily richer. This paper introduces the principle and steps of the Tri-training algorithm and, on that basis, classifies the improved variants applied to computer-aided diagnosis and other medical fields, reviewing and analyzing each class systematically. Future directions for improved Tri-training algorithms in medicine are discussed.
Keywords: tri-training; semi-supervised learning; computer-aided diagnosis; machine learning
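The core Tri-training step surveyed above, three learners trained on different samples of the labeled data, each then augmented with the unlabeled points its two peers agree on, can be sketched in a few lines. This is a simplified single round (it omits the classification-error test from Zhou and Li's original formulation), with a toy nearest-centroid learner standing in for real base classifiers:

```python
import numpy as np

class NearestCentroid:
    """Toy base learner: predicts the class of the nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def tri_training_round(learners, X_lab, y_lab, X_unlab):
    """One tri-training round: each learner is retrained on the labeled set
    plus the unlabeled points its two peers agree on."""
    preds = [clf.predict(X_unlab) for clf in learners]
    updated = []
    for i, clf in enumerate(learners):
        j, k = [m for m in range(3) if m != i]
        agree = preds[j] == preds[k]              # peers agree -> pseudo-label
        X_aug = np.vstack([X_lab, X_unlab[agree]])
        y_aug = np.concatenate([y_lab, preds[j][agree]])
        updated.append(type(clf)().fit(X_aug, y_aug))
    return updated

rng = np.random.default_rng(0)
X_lab = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [2.8, 3.2]])
y_lab = np.array([0, 0, 1, 1])
X_unlab = np.vstack([rng.normal(0, 0.2, (10, 2)), rng.normal(3, 0.2, (10, 2))])

# three diverse initial learners (fixed subsets stand in for bootstrap sampling)
subsets = [[0, 2], [1, 3], [0, 1, 2, 3]]
learners = [NearestCentroid().fit(X_lab[s], y_lab[s]) for s in subsets]
learners = tri_training_round(learners, X_lab, y_lab, X_unlab)
```

The medical variants reviewed in this entry mostly change how the pseudo-labeled points are filtered or weighted; the agree-and-augment skeleton stays the same.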
3. A Tri-training algorithm based on density peaks clustering (cited by 2)
Authors: 罗宇航, 吴润秀, 崔志华, 张翼英, 何业慎, 赵嘉. 《系统仿真学报》, CAS CSCD 北大核心, 2024, No. 5, pp. 1189-1198 (10 pages).
By exploiting unlabeled data, Tri-training can effectively improve classifier generalization, but it easily mislabels unlabeled data and thus introduces training noise. A Tri-training with density peaks clustering (DPC-TT) algorithm is proposed. Density peaks clustering uses cluster centers and local density to pick out samples with a well-behaved spatial structure. DPC-TT applies density peaks clustering to obtain the cluster centers of the training data and the local density of each sample; samples within the cutoff distance of a cluster center are regarded as structurally reliable and marked as core data, and only the core data are used to update the classifiers. This reduces training noise during iteration and thereby improves classifier performance. Experimental results show that DPC-TT has better classification performance than the standard Tri-training algorithm and its improved variants.
Keywords: tri-training; semi-supervised learning; density peaks clustering; spatial structure; classifier
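The two quantities DPC relies on, the local density of each sample and its distance to the nearest higher-density point, can be computed directly. A small sketch (the cutoff distance `d_c` and the data are made up for illustration; selecting core data then amounts to keeping points close to the high-rho, high-delta cluster centers):

```python
import numpy as np

def density_peaks(X, d_c):
    """Compute local density rho (points within cutoff d_c) and delta
    (distance to the nearest point of higher density), as in DPC."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (D < d_c).sum(axis=1) - 1          # exclude the point itself
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    return rho, delta

# two tight groups plus one outlier: group members get high rho, the outlier rho = 0
X = np.array([[0, 0], [0.1, 0], [0, 0.1],
              [5, 5], [5.1, 5], [5, 5.1],
              [10, 0]], float)
rho, delta = density_peaks(X, d_c=0.5)
```

Cluster centers are the points where both `rho` and `delta` are large; the outlier's low density is exactly what DPC-TT uses to keep it out of the core data.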
4. A Tri-Training semi-supervised method for classifying non-functional requirements in industrial software
Authors: 宋百灵, 何彦众, 张泽贤, 曾诚, 俞嘉怡, 刘进, 胡文华. 《武汉大学学报(理学版)》, CAS CSCD 北大核心, 2024, No. 3, pp. 367-375 (9 pages).
Building on the strength of Word2Vec's Skip-gram model at capturing subtle semantic differences in complex software requirement documents, a Tri-Training semi-supervised method for classifying non-functional requirements is proposed to cope with the limited number of labeled samples in software requirements engineering and the resulting drop in classification performance. Unlike traditional semi-supervised algorithms applied to fully redundant views or a single classifier, the Tri-training algorithm initializes three different classifiers on three labeled datasets produced by bootstrap sampling and has the three classifiers generate pseudo-labeled data by majority vote, which removes the restrictions on the training set and makes the classification framework more general and usable. Applied to the PROMISE software requirements dataset spanning several industrial domains, the method achieves good classification performance on datasets with different labeling ratios; in particular, when labeled data are scarce, it shows a clear advantage in recall and F1 score over supervised learning and other semi-supervised algorithms.
Keywords: software requirements classification; semi-supervised learning; tri-training
5. Tri-training-based extraction of adverse drug reaction entities from social media
Authors: 何忠玻, 严馨, 徐广义, 张金鹏, 邓忠莹. 《计算机工程与应用》, CSCD 北大核心, 2024, No. 3, pp. 177-186 (10 pages).
Because social media data are generated in real time, exploiting them can offset the latency of adverse drug reaction entity extraction from traditional medical literature, but social media text suffers from high annotation cost and heavy noise, which keeps models from performing well. To address the high cost of annotating the large volume of unlabeled posts, a Tri-training semi-supervised method is used to extract adverse drug reaction entities: three learners, Transformer+CRF, BiLSTM+CRF, and IDCNN+CRF, label the unlabeled data, a consistency evaluation function iteratively expands the training set, and weighted voting merges the models' output labels. To handle the informality of social media text (heavy colloquialism, typos, and so on), character-level and word-level vectors are fused as the input of the embedding layer to extract richer semantic information. Experimental results show that the proposed model performs well on a dataset collected from the 好大夫在线 (Haodf Online) website.
Keywords: Chinese social media; adverse drug reactions; entity extraction; semi-supervised learning; tri-training
6. Semi-supervised network traffic classification based on feature selection and an improved Tri-training algorithm (cited by 2)
Authors: 李道全, 祝圣凯, 翟豫阳, 胡一帆. 《计算机工程与应用》, CSCD 北大核心, 2024, No. 23, pp. 275-285 (11 pages).
Network traffic classification matters greatly for network management, yet current machine-learning-based classification methods face an annotation bottleneck and imbalanced samples. To address both problems, a semi-supervised traffic classification model combining feature selection with an improved Tri-training algorithm is proposed. The maximal information coefficient and the Pearson coefficient are used to select features that are highly correlated with the class but uncorrelated with each other, an improved ReliefF selects features that favor the minority classes, and the selected features are combined into an optimal feature subset that mitigates the impact of imbalanced data on classification. Drawing on ensemble ideas, the traditional Tri-training algorithm is improved through optimized iteration and weighted decisions, and the improved algorithm is used to relieve the annotation bottleneck. Experiments on the Moore dataset show that, using only a small amount of imbalanced labeled data, the proposed method reaches an F-measure of 95.26% and classifies better than advanced machine learning algorithms, the original Tri-training method, and several of its improved variants.
Keywords: semi-supervised network; class imbalance; network traffic classification; feature selection; tri-training
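The relevance/redundancy filter described above, keep features correlated with the class, drop features correlated with each other, can be sketched with plain Pearson correlations. This is a simplified single greedy pass (the paper's maximal information coefficient and ReliefF steps are not reproduced), with illustrative thresholds and a continuous target standing in for the class label:

```python
import numpy as np

def select_features(X, y, relevance_thr=0.5, redundancy_thr=0.9):
    """Greedy filter: keep features strongly correlated with the target
    but weakly correlated with features already selected."""
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    order = np.argsort(-rel)                      # most relevant first
    selected = []
    for j in order:
        if rel[j] < relevance_thr:
            break                                 # remaining features too weak
        redundant = any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > redundancy_thr
                        for k in selected)
        if not redundant:
            selected.append(int(j))
    return selected

rng = np.random.default_rng(1)
y = rng.normal(size=200)
f0 = y + 0.1 * rng.normal(size=200)     # relevant
f1 = f0 + 0.01 * rng.normal(size=200)   # relevant but redundant with f0
f2 = rng.normal(size=200)               # irrelevant
X = np.column_stack([f0, f1, f2])
sel = select_features(X, y)
```

Only one of the two near-duplicate features survives and the irrelevant one is dropped, which is the behavior the optimal-feature-subset step relies on.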
7. A semi-supervised soft sensor modeling method based on Tri-training GPR
Authors: 马君霞, 李林涛, 熊伟丽. 《化工学报》, EI CSCD 北大核心, 2024, No. 7, pp. 2613-2623 (11 pages).
By building and combining multiple learners, ensemble learning often generalizes markedly better than a single learner, but building a high-performance ensemble soft sensor model remains a challenge when the proportion of labeled data is small. To address this, a semi-supervised ensemble soft sensor modeling method, the Tri-training GPR model, is proposed. The modeling strategy exploits the strengths of semi-supervised learning to reduce the demand for labeled samples: even at a low label rate, it screens unlabeled data to enlarge the labeled sample set available for modeling, and, combining the strengths of semi-supervised and ensemble learning, it further contributes a new way of selecting high-confidence samples. The method is applied to a penicillin fermentation process and a debutanizer column to build soft sensors for penicillin and butane concentration; it yields better predictions than traditional modeling methods, validating its effectiveness.
Keywords: soft sensor; ensemble learning; semi-supervised learning; tri-training; Gaussian process regression; process control; kinetic model; chemical processes
8. Building a semi-supervised software defect prediction model with the Tri-training algorithm (cited by 1)
Authors: 柯灵. 《佳木斯大学学报(自然科学版)》, CAS, 2024, No. 9, pp. 12-15 (4 pages).
To address the low prediction accuracy, high feature dimensionality, and long processing time of most current software defect prediction models, a new defect prediction model is built with the semi-supervised Tri-training algorithm. The results show that the optimized Tri-training algorithm reaches a precision of 0.96, a recall of 0.98, and an F1 score of 0.97 on the training set, and a precision of 0.96, a recall of 0.97, and an F1 score of 0.96 on the test set, outperforming the compared algorithms on every benchmark metric. Moreover, the proposed defect prediction model attains a prediction accuracy of 98.5% on real software defects with a shortest response time of only 0.85 s, also well ahead of the three compared models. The model therefore has good practical value and can support work in software security.
Keywords: tri-training; defect prediction; software; features; semi-supervised
9. Method for Estimating the State of Health of Lithium-ion Batteries Based on Differential Thermal Voltammetry and Sparrow Search Algorithm-Elman Neural Network (cited by 1)
Authors: Yu Zhang, Daoyu Zhang, Tiezhou Wu. 《Energy Engineering》, EI, 2025, No. 1, pp. 203-220 (18 pages).
Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues and achieve precise, effective SOH detection, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. Firstly, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman neural network model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested on the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
Keywords: lithium-ion battery; state of health; differential thermal voltammetry; Sparrow Search Algorithm
10. Robustness Optimization Algorithm with Multi-Granularity Integration for Scale-Free Networks Against Malicious Attacks (cited by 1)
Authors: ZHANG Yiheng, LI Jinhai. 《昆明理工大学学报(自然科学版)》, 北大核心, 2025, No. 1, pp. 54-71 (18 pages).
Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at the various granular layers of this structure, and finally realizes information exchange between different granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints, along with new network similarity and dissimilarity evaluation metrics to improve the effectiveness of the optimization operators. In the experiments, MGIA enhances the robustness of the scale-free network by 67.6%, approximately 17.2% higher than the optimization effects achieved by eight existing complex network robustness optimization algorithms.
Keywords: complex network model; multi-granularity; scale-free networks; robustness; algorithm integration
11. Short-Term Wind Power Forecast Based on the STL-IAOA-iTransformer Algorithm: A Case Study in Northwest China (cited by 2)
Authors: Zhaowei Yang, Bo Yang, Wenqi Liu, Miwei Li, Jiarong Wang, Lin Jiang, Yiyan Sang, Zhenning Pan. 《Energy Engineering》, 2025, No. 2, pp. 405-430 (26 pages).
Accurate short-term wind power forecast techniques play a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid approach named STL-IAOA-iTransformer, based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL decomposes the original data into components with less redundant information. The extracted components, along with the weather data, are then input into iTransformer for short-term wind power forecasting, and the final predicted wind power curve is obtained by combining the predicted components. To improve model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated with real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate its superiority under different wind characteristics, real power generation data from Southwest China are also used. Comparative results against six other state-of-the-art prediction models show that the proposed model fits the true generation series well and achieves high prediction accuracy.
Keywords: short-term wind power forecast; improved arithmetic optimization algorithm; iTransformer algorithm; SimuNPS
12. A LODBO algorithm for multi-UAV search and rescue path planning in disaster areas (cited by 1)
Authors: Liman Yang, Xiangyu Zhang, Zhiping Li, Lei Li, Yan Shi. 《Chinese Journal of Aeronautics》, 2025, No. 2, pp. 200-213 (14 pages).
In disaster relief operations, multiple UAVs can be used to search for trapped people. In recent years, many researchers have proposed machine-learning-based, sampling-based, and heuristic algorithms to solve the multi-UAV path planning problem. Among these, the Dung Beetle Optimization (DBO) algorithm has been widely applied owing to its diverse search patterns. However, the update strategies for the rolling and thieving dung beetles in DBO are overly simplistic, potentially leaving the search space under-explored and causing convergence to local optima, so discovery of the optimal path is not guaranteed. To address these issues, we propose an improved DBO algorithm guided by the Landmark Operator (LODBO). Specifically, we first use tent mapping in the population update strategy, which lets the algorithm generate initial solutions with enhanced diversity within the search space. Second, we expand the search range of the rolling dung beetle with the landmark factor. Finally, an adaptive factor that changes with the number of iterations improves the global search ability of the thieving dung beetle, making it more likely to escape local optima. Extensive simulation experiments show that, on the disaster search and rescue task set, LODBO obtains the optimal path in the shortest time compared with the Genetic Algorithm (GA), the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the original DBO algorithm.
Keywords: unmanned aerial vehicle; path planning; metaheuristic algorithm; DBO algorithm; NP-hard problems
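Tent-map initialization, which LODBO uses to diversify the starting population, is a one-liner per coordinate. A sketch under stated assumptions: the 0.7 breakpoint is one common variant in the metaheuristics literature (the classic 0.5 tent map degenerates under binary floating point), and the bounds and population sizes are illustrative, not from the paper:

```python
import numpy as np

def tent_map_population(pop_size, dim, lb, ub, x0=0.37, a=0.7):
    """Initialise an optimiser population with a tent chaotic map
    (x -> x/a if x < a else (1-x)/(1-a)), scaled into [lb, ub]."""
    x = x0
    chaos = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = x / a if x < a else (1 - x) / (1 - a)   # stays in (0, 1)
            chaos[i, j] = x
    return lb + chaos * (ub - lb)

pop = tent_map_population(pop_size=20, dim=3, lb=-5.0, ub=5.0)
```

Compared with uniform random draws, the chaotic sequence spreads the initial solutions more evenly over the search space, which is the diversity benefit the abstract refers to.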
13. Research on the Euclidean Algorithm and Reflections on Its Teaching
Authors: ZHANG Shaohua. 《应用数学》, 北大核心, 2025, No. 1, pp. 308-310 (3 pages).
In this paper, we prove that Euclid's algorithm, Bezout's equation, and the division algorithm are equivalent to each other. Our result shows that Euclid had preliminarily established the theory of divisibility and the greatest common divisor. We further provide several suggestions for teaching.
Keywords: Euclid's algorithm; division algorithm; Bezout's equation
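The equivalence this entry discusses is visible in code: the extended Euclidean algorithm applies the division algorithm at each step and returns the Bezout coefficients alongside the gcd. A standard sketch:

```python
def extended_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, s, t) with g = gcd(a, b)
    and s*a + t*b == g, which is exactly Bezout's equation."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)   # division algorithm step: a = qb + r
    return g, t, s - (a // b) * t

g, s, t = extended_gcd(240, 46)
# gcd(240, 46) = 2, with Bezout coefficients s = -9, t = 47
```

Each recursive call is one application of the division algorithm, and unwinding the recursion back-substitutes the remainders into Bezout's equation, which is the equivalence the paper formalizes.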
14. DDoS Attack Autonomous Detection Model Based on a Multi-Strategy Integrated Zebra Optimization Algorithm
Authors: Chunhui Li, Xiaoying Wang, Qingjie Zhang, Jiaye Liang, Aijing Zhang. 《Computers, Materials & Continua》, SCIE EI, 2025, No. 1, pp. 645-674 (30 pages).
Previous studies have shown that deep learning is very effective at detecting known attacks. However, when facing unknown attacks, models such as Deep Neural Networks (DNN) combined with Long Short-Term Memory (LSTM), or Convolutional Neural Networks (CNN) combined with LSTM, are built by simple stacking and suffer from feature loss, low efficiency, and low accuracy. This paper therefore proposes an autonomous detection model for Distributed Denial of Service attacks, the Multi-Scale Convolutional Neural Network-Bidirectional Gated Recurrent Units-Single Headed Attention (MSCNN-BiGRU-SHA) model, based on a Multi-strategy Integrated Zebra Optimization Algorithm (MI-ZOA). The model is trained and tested with the CICDDoS2019 dataset, and its performance is evaluated on a new GINKS2023 dataset. The hyperparameters for Conv_filter and GRU_unit are optimized with MI-ZOA. The experimental results show that the test accuracy of the MI-ZOA-based MSCNN-BiGRU-SHA model proposed in this paper reaches 0.9971 on the CICDDoS2019 dataset and 0.9386 on the new GINKS2023 dataset created in this paper. Compared with the MSCNN-BiGRU-SHA model based on the plain Zebra Optimization Algorithm (ZOA), detection accuracy on the GINKS2023 dataset improves by 5.81%, precision by 1.35%, recall by 9%, and F1 score by 5.55%. Compared with MSCNN-BiGRU-SHA models tuned by Grid Search, Random Search, and Bayesian Optimization, the MI-ZOA-optimized model performs better in terms of accuracy, precision, recall, and F1 score.
Keywords: distributed denial of service attack; intrusion detection; deep learning; zebra optimization algorithm; multi-strategy integrated zebra optimization algorithm
15. Bearing capacity prediction of open caissons in two-layered clays using five tree-based machine learning algorithms (cited by 1)
Authors: Rungroad Suppakul, Kongtawan Sangjinda, Wittaya Jitchaijaroen, Natakorn Phuksuksakul, Suraparb Keawsawasvong, Peem Nuaklong. 《Intelligent Geoengineering》, 2025, No. 2, pp. 55-65 (11 pages).
Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability to diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five ensemble machine learning (ML) algorithms, random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, varying in geometrical and soil parameters. The FELA was performed with OptumG2 software using adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils, and can help reduce computational costs while improving reliability in the early stages of foundation design.
Keywords: two-layered clay; open caisson; tree-based algorithms; FELA; machine learning
16. Path Planning for a Thermal Power Plant Fan Inspection Robot Based on an Improved A* Algorithm (cited by 1)
Authors: Wei Zhang, Tingfeng Zhang. 《Journal of Electronic Research and Application》, 2025, No. 1, pp. 233-239 (7 pages).
To improve the efficiency and accuracy of path planning for fan inspection tasks in thermal power plants, this paper proposes an intelligent inspection robot path planning scheme based on an improved A* algorithm. The inspection robot uses multiple sensors to monitor key parameters of the fans, such as vibration, noise, and bearing temperature, and uploads the data to the monitoring center. The robot's inspection path employs the improved A* algorithm, incorporating obstacle penalty terms, path reconstruction, and smoothing optimization techniques, thereby achieving optimal path planning for the inspection robot in complex environments. Simulation results demonstrate that the improved A* algorithm significantly outperforms the traditional A* algorithm in terms of total path distance, smoothness, and detour rate, effectively improving the execution efficiency of inspection tasks.
Keywords: power plant fans; inspection robot; path planning; improved A* algorithm
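The baseline that this entry's improvements build on is standard grid A*. A minimal 4-connected version with a Manhattan heuristic (the paper's obstacle-penalty, path-reconstruction, and smoothing terms are not included; the grid is a toy example):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells with 1 are obstacles.
    Uses the admissible Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]   # (f = g + h, g, cell, path)
    seen = set()
    while open_heap:
        f, g, pos, path = heapq.heappop(open_heap)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in seen:
                heapq.heappush(open_heap, (g + 1 + h((nr, nc)), g + 1,
                                           (nr, nc), path + [(nr, nc)]))
    return None   # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

The paper's improvements would add penalty terms near obstacles to the `g + 1` cost and post-process the returned path for smoothness, but the open-list loop above is the common core.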
17. An Algorithm for Cloud-based Web Service Combination Optimization Through Plant Growth Simulation
Authors: Li Qiang, Qin Huawei, Qiao Bingqin, Wu Ruifang. 《系统仿真学报》, 北大核心, 2025, No. 2, pp. 462-473 (12 pages).
To improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. The model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. A light-induced plant growth simulation algorithm is then established; the performance of the algorithm is compared across several plant types, and the best plant model is selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than PSO, 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Keywords: cloud-based service; scheduling algorithm; resource constraint; load optimization; cloud computing; plant growth simulation algorithm
18. Improved algorithm for multi-mainlobe interference suppression under uncorrelated and coherent conditions (cited by 1)
Authors: CAI Miaohong, CHENG Qiang, MENG Jinli, ZHAO Dehua. 《Journal of Southeast University (English Edition)》, 2025, No. 1, pp. 84-90 (7 pages).
A new method based on the iterative adaptive algorithm (IAA) and blocking matrix preprocessing (BMP) is proposed to suppress multi-mainlobe interference. The algorithm precisely estimates the spatial spectrum and the directions of arrival (DOA) of interferences to overcome the drawbacks of conventional adaptive beamforming (ABF) methods. Mainlobe interferences are identified by calculating the correlation coefficients between direction steering vectors (SVs) and are rejected by the BMP pretreatment. IAA is subsequently employed to reconstruct a sidelobe interference-plus-noise covariance matrix for improved ABF and residual interference suppression. Simulation results demonstrate that the proposed method outperforms normal methods based on BMP and eigen-projection matrix preprocessing (EMP) under both uncorrelated and coherent circumstances.
Keywords: mainlobe interference suppression; adaptive beamforming; spatial spectral estimation; iterative adaptive algorithm; blocking matrix preprocessing
19. Intelligent sequential multi-impulse collision avoidance method for non-cooperative spacecraft based on an improved search tree algorithm (cited by 1)
Authors: Xuyang CAO, Xin NING, Zheng WANG, Suyi LIU, Fei CHENG, Wenlong LI, Xiaobin LIAN. 《Chinese Journal of Aeronautics》, 2025, No. 4, pp. 378-393 (16 pages).
The problem of collision avoidance for non-cooperative targets has received significant attention from researchers in recent years. Non-cooperative targets exhibit uncertain states and unpredictable behaviors, making collision avoidance significantly more challenging than for space debris. Much existing research focuses on the continuous thrust model, whereas the impulsive maneuver model is more appropriate for long-duration and long-distance avoidance missions. It is also important to minimize the impact on the original mission while avoiding non-cooperative targets. On the other hand, existing avoidance algorithms are computationally complex and time-consuming, especially given the limited computing capability of the on-board computer, which poses challenges for practical engineering applications. To conquer these difficulties, this paper makes the following key contributions: (A) a turn-based (sequential decision-making) limited-area impulsive collision avoidance model considering the time delay of precision orbit determination is established for the first time; (B) a novel Selection Probability Learning Adaptive Search-depth Search Tree (SPL-ASST) algorithm is proposed for non-cooperative target avoidance, which improves decision-making efficiency by introducing an adaptive search-depth mechanism and a neural network into traditional Monte Carlo Tree Search (MCTS). Numerical simulations confirm the effectiveness and efficiency of the proposed method.
Keywords: non-cooperative target; collision avoidance; limited motion area; impulsive maneuver model; search tree algorithm; neural networks
20. A Class of Parallel Algorithms for Solving Low-rank Tensor Completion
Authors: LIU Tingyan, WEN Ruiping. 《应用数学》, 北大核心, 2025, No. 4, pp. 1134-1144 (11 pages).
In this paper, we establish a class of parallel algorithms for solving the low-rank tensor completion problem. The main idea is that N singular value decompositions are carried out in N different processors, one for each slice matrix under the unfold operator, and the fold operator then forms the next iteration tensor, so that computing time is reduced. In theory, we analyze the global convergence of the algorithm. In numerical experiments on simulated data and real image inpainting, the parallel algorithm outperforms its original algorithm in CPU time at the same precision.
Keywords: tensor completion; low-rank; convergence; parallel algorithm
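The slice-wise structure the abstract describes, where each slice's SVD can run on its own processor before the slices are folded back into a tensor, can be sketched with a thread pool. This is only the parallel truncation step on frontal slices with a fixed target rank; the actual completion iteration, with its observed-entry constraints and unfold operators, is omitted:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def truncate_slice(M, r):
    """Rank-r approximation of one slice via its SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def parallel_low_rank_step(T, r):
    """One iteration step: rank-r truncate every frontal slice T[:, :, k],
    each slice handled by its own worker (a stand-in for N processors)."""
    slices = [T[:, :, k] for k in range(T.shape[2])]
    with ThreadPoolExecutor() as pool:
        approx = list(pool.map(lambda M: truncate_slice(M, r), slices))
    return np.stack(approx, axis=2)   # "fold" the slices back into a tensor

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 5))   # exactly rank-2 matrix
T = np.stack([A, 2 * A, -A], axis=2)                     # rank-2 frontal slices
T_hat = parallel_low_rank_step(T, r=2)                   # exact on rank-2 slices
```

Because the per-slice SVDs are independent, the wall-clock cost of one iteration drops toward the cost of the largest single slice, which is the speedup the paper measures in CPU time.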