Journal Articles
293 articles found
A novel approach to identify the spatial characteristics of ozone-precursor sensitivity based on interpretable machine learning
1
Authors: Huiling He, Kaihui Zhao, Zibing Yuan, Jin Shen, Yujun Lin, Shu Zhang, Menglei Wang, Anqi Wang, Puyu Lian. Journal of Environmental Sciences, 2026, No. 1, pp. 54-63 (10 pages)
To curb the worsening tropospheric ozone (O_(3)) pollution problem in China, rapid and accurate identification of O_(3)-precursor sensitivity (OPS) is a crucial prerequisite for formulating effective contingency O_(3) pollution control strategies. However, currently widely-used methods, such as statistical models and numerical models, exhibit inherent limitations in identifying OPS in a timely and accurate manner. In this study, we developed a novel approach to identify OPS based on the eXtreme Gradient Boosting model, the Shapley additive explanation (SHAP) algorithm, and volatile organic compound (VOC) photochemical decay adjustment, using meteorology and speciated pollutant monitoring data as the input. By comparing the difference in SHAP values between the base scenario and the precursor reduction scenario for nitrogen oxides (NO_(x)) and VOCs, OPS was divided into NO_(x)-limited, VOCs-limited, and transition regimes. Using the long-lasting O_(3) pollution episode in the autumn of 2022 in the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) as an example, we demonstrated large spatiotemporal heterogeneities of OPS over the GBA, which generally shifted from NO_(x)-limited to VOCs-limited from September to October and were more inclined to be VOCs-limited in the central areas and NO_(x)-limited in the peripheral areas. This study developed an innovative OPS identification method by comparing the difference in SHAP values before and after precursor emission reduction. Our method enables accurate identification of OPS on a time scale of seconds, thereby providing a state-of-the-art tool for rapid guidance of spatially specific O_(3) control strategies.
Keywords: O_(3)-precursor sensitivity; Machine learning; Extreme gradient boosting model; Shapley algorithm; Greater Bay Area
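The regime decision described in the abstract above, comparing SHAP values between a base scenario and a precursor-reduction scenario, can be sketched as a simple rule. The function name, threshold, and labels below are illustrative assumptions, not the authors' implementation:

```python
def classify_ops(delta_shap_nox: float, delta_shap_voc: float,
                 ratio_threshold: float = 1.5) -> str:
    """Classify ozone-precursor sensitivity (OPS) from the drop in the
    predicted-O3 SHAP contribution under each reduction scenario
    (hypothetical decision rule for illustration only)."""
    if delta_shap_nox > ratio_threshold * delta_shap_voc:
        return "NOx-limited"   # O3 responds mainly to NOx cuts
    if delta_shap_voc > ratio_threshold * delta_shap_nox:
        return "VOC-limited"   # O3 responds mainly to VOC cuts
    return "transition"        # comparable response to both precursors

# Example: a grid cell where VOC cuts reduce predicted O3 far more than NOx cuts
print(classify_ops(delta_shap_nox=0.4, delta_shap_voc=2.1))  # VOC-limited
```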
Gradient Optimizer Algorithm with Hybrid Deep Learning Based Failure Detection and Classification in the Industrial Environment (Cited: 1)
2
Authors: Mohamed Zarouan, Ibrahim M. Mehedi, Shaikh Abdul Latif, Md. Masud Rana. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 2, pp. 1341-1364 (24 pages)
Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring the seamless operation of the system. Current industrial processes are getting smarter with the emergence of Industry 4.0. Specifically, various modernized industrial processes have been equipped with quite a few sensors to collect process-based data to find faults arising or prevailing in processes along with monitoring the status of processes. Fault diagnosis of rotating machines serves a main role in the engineering field and industrial production. Due to the disadvantages of existing fault diagnosis approaches, which greatly depend on professional experience and human knowledge, intelligent fault diagnosis based on deep learning (DL) has attracted researchers' interest. DL achieves the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) in the industrial environment. The presented GOAHDL-FDC technique initially applies the continuous wavelet transform (CWT) for preprocessing the actual vibrational signals of the rotating machinery. Next, the residual network (ResNet18) model is exploited for the extraction of features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, GOA-based hyperparameter tuning is performed to adjust the parameter values of the HDL model accurately. The experimental result analysis of the GOAHDL-FDC algorithm takes place using a series of simulations, and the experimentation outcomes highlight the better results of the GOAHDL-FDC technique under different aspects.
Keywords: Fault detection; Industry 4.0; Gradient optimizer algorithm; Deep learning; Rotating machineries; Artificial intelligence
Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning (Cited: 7)
3
Authors: Xin Luo, Wen Qin, Ani Dong, Khaled Sedraoui, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 2, pp. 402-411 (10 pages)
A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. Aiming at addressing this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.
Keywords: Big data; Industrial application; Industrial data; Latent factor analysis; Machine learning; Parallel algorithm; Recommender system (RS); Stochastic gradient descent (SGD)
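The momentum mechanism that MPSGD builds on can be illustrated with a plain single-worker momentum SGD loop; the quadratic toy objective and constants below are assumptions for illustration, not the paper's parallel scheme:

```python
import numpy as np

def momentum_sgd(grad, w0, lr=0.1, beta=0.9, steps=300):
    """Minimal single-worker momentum SGD: the velocity term accumulates an
    exponential average of past gradients, smoothing updates and accelerating
    convergence along persistent descent directions."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + (1.0 - beta) * grad(w)  # momentum accumulation
        w = w - lr * v                         # parameter update
    return w

# Toy objective f(w) = ||w - target||^2 with gradient 2 * (w - target)
target = np.array([1.0, -2.0])
w_star = momentum_sgd(lambda w: 2.0 * (w - target), w0=[0.0, 0.0])
print(np.round(w_star, 4))  # converges to [1.0, -2.0]
```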
Data-Driven Learning Control Algorithms for Unachievable Tracking Problems (Cited: 3)
4
Authors: Zeyi Zhang, Hao Jiang, Dong Shen, Samer S. Saab. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 1, pp. 205-218 (14 pages)
For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study aims to investigate solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incorporates low-memory footprints and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
Keywords: Data-driven algorithms; Incomplete information; Iterative learning control; Gradient information; Unachievable problems
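The P-type learning scheme discussed above follows the classic trial-to-trial update u_{k+1}(t) = u_k(t) + L * e_k(t); the scalar toy plant and learning gain below are illustrative assumptions, not the paper's uncertain-system setting:

```python
import numpy as np

def p_type_ilc(ref, gain=0.8, trials=50):
    """P-type iterative learning control on a toy static plant y = 0.5 * u:
    each trial corrects the input profile with the previous trial's
    tracking error, scaled by a fixed learning gain."""
    u = np.zeros_like(ref)
    for _ in range(trials):
        y = 0.5 * u          # plant response over the trial (assumed known here)
        e = ref - y          # tracking error over the trial
        u = u + gain * e     # P-type learning update
    return u, ref - 0.5 * u

ref = np.array([0.0, 1.0, 2.0, 1.0])
u, e = p_type_ilc(ref)
print(np.max(np.abs(e)))  # essentially zero after 50 trials
```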
Distributed Byzantine-Resilient Learning of Multi-UAV Systems via Filter-Based Centerpoint Aggregation Rules
5
Authors: Yukang Cui, Linzhen Cheng, Michael Basin, Zongze Wu. IEEE/CAA Journal of Automatica Sinica, 2025, No. 5, pp. 1056-1058 (3 pages)
Dear Editor, Through distributed machine learning, multi-UAV systems can achieve global optimization goals, such as optimal target tracking, without a centralized server by leveraging local calculation and communication with neighbors. In this work, we implement the stochastic gradient descent (SGD) algorithm distributedly to optimize tracking errors based on local state and aggregation of the neighbors' estimations. However, Byzantine agents can mislead neighbors, causing deviations from optimal tracking. We prove that the swarm achieves resilient convergence if aggregated results lie within the normal neighbors' convex hull, which can be guaranteed by the introduced centerpoint-based aggregation rule. In the given simulated scenarios, distributed learning using average, geometric median (GM), and coordinate-wise median (CM) based aggregation rules fails to track the target. Compared to solely using the centerpoint aggregation method, our approach, which combines a pre-filter with the centerpoint aggregation rule, significantly enhances resilience against Byzantine attacks, achieving faster convergence and smaller tracking errors.
Keywords: Multi-UAV systems; Distributed machine learning; Byzantine resilience; Centerpoint-based aggregation; Stochastic gradient descent (SGD); Optimal target tracking
Memetic algorithms-based neural network learning for basic oxygen furnace endpoint prediction
6
Authors: Peng CHEN, Yong-zai LU. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2010, No. 11, pp. 841-848 (8 pages)
Based on the critical position of endpoint quality prediction for basic oxygen furnaces (BOFs) in steelmaking, and the latest results in computational intelligence (CI), this paper deals with the development of a novel memetic algorithm (MA) for neural network (NN) learning. Included in this is the integration of extremal optimization (EO) and Levenberg-Marquardt (LM) gradient search, and its application in BOF endpoint quality prediction. The fundamental analysis reveals that the proposed EO-LM algorithm may provide superior performance in generalization and computational efficiency, and avoid local minima, compared to traditional NN learning methods. Experimental results with production-scale BOF data show that the proposed method can effectively improve the NN model for BOF endpoint quality prediction.
Keywords: Memetic algorithm (MA); Neural network (NN) learning; Back propagation (BP); Extremal optimization (EO); Levenberg-Marquardt (LM) gradient search; Basic oxygen furnace (BOF)
Chimp Optimization Algorithm Based Feature Selection with Machine Learning for Medical Data Classification
7
Authors: Firas Abedi, Hayder M. A. Ghanimi, Abeer D. Algarni, Naglaa F. Soliman, Walid El-Shafai, Ali Hashim Abbas, Zahraa H. Kareem, Hussein Muhi Hariz, Ahmed Alkhayyat. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 12, pp. 2791-2814 (24 pages)
Data mining plays a crucial role in extracting meaningful knowledge from large-scale data repositories, such as data warehouses and databases. Association rule mining, a fundamental process in data mining, involves discovering correlations, patterns, and causal structures within datasets. In the healthcare domain, association rules offer valuable opportunities for building knowledge bases, enabling intelligent diagnoses, and extracting invaluable information rapidly. This paper presents a novel approach called the Machine Learning based Association Rule Mining and Classification for Healthcare Data Management System (MLARMC-HDMS). The MLARMC-HDMS technique integrates classification and association rule mining (ARM) processes. Initially, the chimp optimization algorithm-based feature selection (COAFS) technique is employed within MLARMC-HDMS to select relevant attributes. Inspired by the foraging behavior of chimpanzees, the COA algorithm mimics their search strategy for food. Subsequently, the classification process utilizes a stochastic gradient descent with multilayer perceptron (SGD-MLP) model, while the Apriori algorithm determines attribute relationships. We propose a COA-based feature selection approach for medical data classification using machine learning techniques. This approach involves selecting pertinent features from medical datasets through COA and training machine learning models using the reduced feature set. We evaluate the performance of our approach on various medical datasets employing diverse machine learning classifiers. Experimental results demonstrate that our proposed approach surpasses alternative feature selection methods, achieving higher accuracy and precision rates in medical data classification tasks. The study showcases the effectiveness and efficiency of the COA-based feature selection approach in identifying relevant features, thereby enhancing the diagnosis and treatment of various diseases. To provide further validation, we conduct detailed experiments on a benchmark medical dataset, revealing the superiority of the MLARMC-HDMS model over other methods, with a maximum accuracy of 99.75%. Therefore, this research contributes to the advancement of feature selection techniques in medical data classification and highlights the potential for improving healthcare outcomes through accurate and efficient data analysis. The presented MLARMC-HDMS framework and COA-based feature selection approach offer valuable insights for researchers and practitioners working in the field of healthcare data mining and machine learning.
Keywords: Association rule mining; Data classification; Healthcare data; Machine learning; Parameter tuning; Data mining; Feature selection; MLARMC-HDMS; COA; Stochastic gradient descent; Apriori algorithm
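The Apriori step mentioned in the abstract determines attribute relationships by counting itemset support; a minimal pair-support sketch, with made-up transactions and a made-up threshold:

```python
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Count support for attribute pairs and keep those meeting the
    minimum-support threshold (the core Apriori counting step)."""
    counts = {}
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    n = len(transactions)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

# Hypothetical symptom "transactions", one per patient record
tx = [{"fever", "cough"}, {"fever", "cough", "fatigue"}, {"cough"}, {"fever"}]
print(frequent_pairs(tx, min_support=0.5))  # {('cough', 'fever'): 0.5}
```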
A Q-learning binary classification algorithm based on generative models (Cited: 1)
8
Authors: 尚志刚, 徐若灏, 乔康加, 杨莉芳, 李蒙蒙. 《计算机应用研究》 (CSCD, 北大核心), 2020, No. 11, pp. 3326-3329, 3333 (5 pages)
For binary classification problems, classifiers based on discriminative models generally seek a single optimal decision boundary and are therefore susceptible to fluctuations in the data. To address this problem, this paper proposes a Q-learning binary classification algorithm based on generative models (BGQ-learning), which encodes states and actions separately to obtain a decision function for each class, increasing the flexibility of the decision space. When solving for the parameters, a combined optimization method of the least-squares temporal-difference (TD) algorithm and semi-gradient descent is adopted, accelerating parameter convergence. Experiments comparing the classification performance of BGQ-learning with three classical classifiers and one novel classifier on seven datasets from the UCI repository show that the algorithm has excellent stability and good classification accuracy.
Keywords: Q-learning; Generative model; Binary classification; Least-squares temporal-difference algorithm; Semi-gradient descent
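The semi-gradient component of the parameter solver described above can be illustrated with the textbook semi-gradient TD(0) update for a linear value function; the two-state chain below is an illustrative assumption, not the paper's classifier:

```python
import numpy as np

def semi_gradient_td0(transitions, n_features, alpha=0.1, gamma=0.9, sweeps=200):
    """Semi-gradient TD(0): the bootstrapped target r + gamma * w.x' is
    treated as a constant, so only the gradient of the current estimate
    w.x enters the update."""
    w = np.zeros(n_features)
    for _ in range(sweeps):
        for x, r, x_next in transitions:
            td_error = r + gamma * np.dot(w, x_next) - np.dot(w, x)
            w = w + alpha * td_error * x   # semi-gradient step
    return w

# Two-state chain with one-hot features: s0 -(r=1)-> s1, s1 -(r=0)-> s1
x0, x1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
transitions = [(x0, 1.0, x1), (x1, 0.0, x1)]
w = semi_gradient_td0(transitions, n_features=2)
print(np.round(w, 2))  # [1. 0.]: V(s0)=1, V(s1)=0
```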
Machine learning-based prediction of soil compression modulus with application to 1D settlement (Cited: 16)
9
Authors: Dong-ming ZHANG, Jin-zhang ZHANG, Hong-wei HUANG, Chong-chong QI, Chen-yu CHANG. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2020, No. 6, pp. 430-444 (15 pages)
The compression modulus (Es) is one of the most significant soil parameters that affects the compressive deformation of geotechnical systems, such as foundations. However, it is difficult and sometimes costly to obtain this parameter in engineering practice. In this study, we aimed to develop a non-parametric ensemble artificial intelligence (AI) approach to calculate the Es of soft clay, in contrast to the traditional regression models proposed in previous studies. A gradient boosted regression tree (GBRT) algorithm was used to discern the non-linear pattern between input variables and the target response, while a genetic algorithm (GA) was adopted for tuning the GBRT model's hyper-parameters. The model was tested through 10-fold cross-validation. A dataset of 221 samples from 65 engineering survey reports from Shanghai infrastructure projects was constructed to evaluate the accuracy of the new model's predictions. The mean squared error and correlation coefficient of the optimum GBRT model applied to the testing set were 0.13 and 0.91, respectively, indicating that the proposed machine learning (ML) model has great potential to improve the prediction of Es for soft clay. A comparison of the performance of empirical formulas and the proposed ML method for predicting foundation settlement indicated the rationality of the proposed ML model and its applicability to the compressive deformation of geotechnical systems. This model, however, cannot be directly applied to the prediction of Es at other sites due to its site specificity. This problem can be solved by retraining the model using local data. This study provides a useful reference for future multi-parameter prediction of soil behavior.
Keywords: Compression modulus prediction; Machine learning (ML); Gradient boosted regression tree (GBRT); Genetic algorithm (GA); Foundation settlement
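The GA-over-GBRT tuning loop described above follows the generic pattern of evolving hyperparameter candidates against a validation score; in this sketch a made-up quadratic fitness stands in for an actual GBRT cross-validation run, and the operator choices (truncation selection, uniform crossover, Gaussian mutation) are assumptions:

```python
import random

def ga_tune(fitness, bounds, pop_size=20, generations=40, seed=0):
    """Minimal real-valued genetic algorithm: truncation selection keeps the
    best half, children are built by uniform crossover, then mutated with
    clipped Gaussian noise scaled to each hyperparameter's range."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # best candidates first
        parents = pop[: pop_size // 2]           # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [min(max(g + rng.gauss(0, 0.05 * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]  # mutate + clip
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in for a GBRT cross-validation score over (learning_rate, max_depth)
score = lambda p: -((p[0] - 0.1) ** 2 + (p[1] - 4.0) ** 2)
best = ga_tune(score, bounds=[(0.01, 0.5), (1.0, 10.0)])
print(best)  # near (0.1, 4.0) under this toy fitness
```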
Adaptive Error Curve Learning Ensemble Model for Improving Energy Consumption Forecasting (Cited: 1)
10
Authors: Prince Waqas Khan, Yung-Cheol Byun. Computers, Materials & Continua (SCIE, EI), 2021, No. 11, pp. 1893-1913 (21 pages)
Despite the advancement within the last decades in the field of smart grids, energy consumption forecasting utilizing meteorological features is still challenging. This paper proposes a genetic algorithm-based adaptive error curve learning ensemble (GA-ECLE) model. The proposed technique copes with the stochastic variations in energy consumption forecasting using a machine learning-based ensembled approach. A modified ensemble model that uses the model's error as a feature is employed to improve forecast accuracy. This approach combines three models, namely CatBoost (CB), Gradient Boost (GB), and Multilayer Perceptron (MLP). The ensembled CB-GB-MLP model's inner mechanism consists of generating meta-data from the Gradient Boosting and CatBoost models to compute the final predictions using the Multilayer Perceptron network. A genetic algorithm is used to obtain the optimal features for the model. To prove the proposed model's effectiveness, we used a four-phase technique on Jeju Island's real energy consumption data. In the first phase, we obtained the results by applying the CB-GB-MLP model. In the second phase, we utilized a GA-ensembled model with optimal features. The third phase compares the energy forecasting result with the proposed ECL-based model. In the fourth and final stage, we applied the GA-ECLE model. We obtained a mean absolute error of 3.05 and a root mean square error of 5.05. Extensive experimental results are provided, demonstrating the superiority of the proposed GA-ECLE model over traditional ensemble models.
Keywords: Energy consumption; Meteorological features; Error curve learning ensemble model; Energy forecasting; Gradient boost; CatBoost; Multilayer perceptron; Genetic algorithm
Personalized movie recommendation method based on ensemble learning
11
Authors: YANG Kun, DUAN Yong. High Technology Letters (EI, CAS), 2022, No. 1, pp. 56-62 (7 pages)
Aiming at the personalized movie recommendation problem, a recommendation algorithm integrating manifold learning and ensemble learning is studied. In this work, manifold learning is used to reduce the dimension of the data so that both the time and space complexities of the model are mitigated. Meanwhile, gradient boosting decision tree (GBDT) is used to train the target user profile prediction model. Based on the recommendation results, a Bayesian optimization algorithm is applied to optimize the recommendation model, which can effectively improve the prediction accuracy. The experimental results show that the proposed algorithm can improve the accuracy of movie recommendation.
Keywords: Gradient boosting decision tree (GBDT); Recommendation algorithm; Manifold learning; Ensemble learning; Bayesian optimization
A gastrointestinal endoscopy image classification method based on an improved Adam algorithm
12
Authors: 孙海静, 崔佳琪, 邵一川, 赵骞, 张乐, 李刚. 《沈阳大学学报(自然科学版)》, 2026, No. 1, pp. 53-60, 90 (9 pages)
This paper proposes an improved Adam algorithm optimized for gastrointestinal endoscopy image classification. By introducing learning-rate decay and adaptive gradient regularization, the algorithm effectively improves classification performance and convergence speed on endoscopy images. The learning-rate decay adjusts the learning rate according to gradient changes to accelerate convergence and reduce oscillation, while the adaptive gradient regularization reduces overfitting and improves generalization. To verify the effectiveness of the improved algorithm, experiments on the public Kvasir dataset achieved an accuracy of 67.67%, an improvement over Adam, SGD, AdamW, and other algorithms.
Keywords: Deep learning; Improved Adam algorithm; Learning-rate decay; Adaptive gradient regularization; Gastrointestinal endoscopy image classification
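The learning-rate decay in this abstract can be sketched on top of the standard Adam update; the inverse-time schedule and toy objective below are illustrative assumptions, not the authors' exact modification:

```python
import numpy as np

def adam_with_decay(grad, w0, lr0=0.1, decay=0.01, beta1=0.9, beta2=0.999,
                    eps=1e-8, steps=500):
    """Standard Adam moment estimates combined with a simple inverse-time
    learning-rate decay lr_t = lr0 / (1 + decay * t)."""
    w = np.asarray(w0, dtype=float)
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g        # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        lr_t = lr0 / (1.0 + decay * t)         # decayed learning rate
        w = w - lr_t * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Toy objective f(w) = (w - 3)^2 with gradient 2 * (w - 3)
w = adam_with_decay(lambda w: 2.0 * (w - 3.0), w0=np.array([0.0]))
print(w)  # close to 3
```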
A deep symbolic regression algorithm based on a dual decision mechanism
13
Authors: 郭泽一, 李凤莲, 徐利春. 《计算机应用》 (北大核心), 2026, No. 2, pp. 406-415 (10 pages)
The deep symbolic regression (DSR) algorithm uses a recurrent neural network (RNN) to automatically generate expression trees and thereby achieves high model performance; however, it cannot balance the accuracy of the expression trees against the simplicity of their structure. This paper therefore proposes a deep symbolic regression algorithm based on a dual decision mechanism (DDSR). First, on top of the RNN's preliminary decisions, a dual scoring mechanism jointly evaluates the structural simplicity and accuracy of the expression trees. Second, reinforcement learning is used to train expression-tree generation, treating it as a sequential decision process, with the risk-seeking proximal policy optimization (RPPO) algorithm providing reward feedback to update the model parameters for the next batch. Experimental results on public datasets show that, compared with DSR, DDSR improves the fit correlation coefficient by at most 0.396 and at least 0.001, with an overall performance gain of 0.116, demonstrating the effectiveness of the DDSR algorithm.
Keywords: Symbolic regression; Deep learning; Scoring mechanism; Proximal policy optimization algorithm; Risk-seeking policy gradient
A GA-XGBoost classification method for seabed sediments around reef islands fusing optical and acoustic features
14
Authors: 张玉洁, 李杰, 李宁宁, 刘晓瑜, 唐秋华, 张靖宇. 《海洋科学进展》 (北大核心), 2026, No. 1, pp. 111-124 (14 pages)
Accurate identification of seabed sediment types is crucial for understanding the distribution of benthic marine communities and planning the sustainable development of marine resources, and machine learning algorithms are an effective means of identifying sediment types. To overcome the limitations of classifying reef-island sediments from a single acoustic data source, fusing multispectral remote-sensing data offers a new approach. This study proposes a multi-source seabed sediment classification method that fuses multispectral remote-sensing data and multibeam data, based on feature selection and the Genetic Algorithm-Extreme Gradient Boosting (GA-XGBoost) algorithm. First, WorldView-2 multispectral data and multibeam data were preprocessed to unify the geographic coordinate system and co-register spatial resolution. Then, spectral features of the multispectral imagery, terrain features of the bathymetric data, and texture features of the backscatter intensity were extracted to form 18 feature parameters, from which a 12-dimensional optimal feature subset was selected using the XGBoost (Extreme Gradient Boosting) algorithm combined with forward stepwise feature selection. A GA-XGBoost classification model was then constructed, trained, and tested with single-source and multi-source data, and its accuracy was compared with the BPNN (Back Propagation Neural Network), GA-BP (Genetic Algorithm-Back Propagation Neural Network), and XGBoost classification algorithms. Finally, the best GA-XGBoost model was applied to classify and visualize the sediments of the entire study area. Experimental results show that the method achieves an overall seabed sediment classification accuracy of 91.23%, a Kappa coefficient of 0.87, and an F1 score of 0.9118, significantly outperforming single-source inputs and the comparison algorithms, indicating that the GA-XGBoost model is an effective new solution for rapid and accurate seabed sediment classification.
Keywords: Seabed sediment classification; Multi-source data; Genetic algorithm; XGBoost; Machine learning
A cooperative path planning method for multiple unmanned ground vehicles based on deep reinforcement learning
15
Authors: 戴晟潭, 王寅, 尚晨晨. 《北京航空航天大学学报》 (北大核心), 2026, No. 2, pp. 541-550 (10 pages)
To solve the cooperative path planning problem in multi-unmanned-ground-vehicle systems, an efficient path planning framework is designed using deep reinforcement learning. A kinematic model of a two-wheel differential-drive unmanned vehicle and a mathematical model of the cooperative obstacle-avoidance scenario are constructed. On this basis, the mechanisms behind the slow training, low sampling efficiency, and poor adaptability of deep reinforcement learning in complex dynamic scenarios with high-dimensional state spaces and continuous action spaces are analyzed, providing a theoretical foundation for research on cooperative multi-vehicle path planning. For generating cooperative obstacle-avoidance and pursuit strategies under fully observable conditions, an improved twin delayed deep deterministic policy gradient (AE-TD3) algorithm is proposed, which adds random Gaussian noise to the pursuit vehicles' output actions and balances exploration against exploitation, enabling the pursuit vehicles to explore unknown environments more effectively and achieve efficient, stable cooperative obstacle avoidance and pursuit. Simulation experiments show that, compared with the twin delayed deep deterministic policy gradient (TD3) algorithm, the improved algorithm converges faster in average reward and shortens capture time by 16.7%, verifying its feasibility.
Keywords: Path planning; Cooperative obstacle avoidance and pursuit; Deep reinforcement learning; Twin delayed deep deterministic policy gradient algorithm; Action-enhanced exploration strategy
Research on the robustness of optical neural networks under different training algorithms
16
Authors: 陆鸣豪, 陆云清, 曹雯, 刘美玉, 邵晓锋, 王瑾. 《自动化技术与应用》, 2026, No. 1, pp. 17-21 (5 pages)
The combination of training algorithm and learning rate is optimized to improve the robustness of optical neural networks (ONNs) to device errors while maintaining highly accurate recognition of digital images. Two fully connected ONN architectures, GridNet and FFTNet, are built in simulation using Mach-Zehnder interferometers (MZIs) as the photonic devices, and ONNs containing device errors are trained with different algorithms, including stochastic gradient descent (SGD), root mean square prop (RMSprop), adaptive moment estimation (Adam), and adaptive gradient descent (Adagrad). The results show that under varying degrees of device error, the FFTNet ONN is more robust than the GridNet ONN. Specifically, FFTNet ONNs trained with RMSprop or Adam at a learning rate of 0.005, or with Adagrad at a learning rate of 0.5, perform best in digital image recognition accuracy and robustness to device errors. Optimizing the combination of training algorithm and learning rate can effectively improve the robustness of ONNs.
Keywords: Optical neural network; Device error; Mach-Zehnder interferometer; Gradient descent algorithm; Learning rate
Research on generator bidding strategies based on an improved deep deterministic policy gradient algorithm
17
Authors: 冯景康, 荆朝霞. 《电气自动化》, 2026, No. 1, pp. 69-71 (3 pages)
To effectively capture the complex strategies that electricity-market participants may form after fully considering their own endowments and feasible bid spaces, a method for optimizing generators' bidding strategies is proposed. First, an electricity-market bidding model that allows generators to flexibly allocate their declared capacity is constructed, and the deep deterministic policy gradient algorithm is applied to solve it. Second, the exploration strategy of the original algorithm is improved, increasing its exploration efficiency. Finally, numerical examples compare the bidding strategies obtained under different bidding models and algorithms. The results show that the proposed model improves the flexibility of bidding strategies, and the proposed algorithmic improvement increases exploration efficiency.
Keywords: Electricity market; Economic dispatch; Bidding strategy; Deep reinforcement learning; Deep deterministic policy gradient algorithm
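The abstract does not specify the improved exploration strategy, but the baseline exploration mechanism of deep deterministic policy gradient methods, perturbing the deterministic action with noise and clipping it to the feasible bid range, can be sketched as follows (the bounds and noise scale are made-up values):

```python
import random

def explore_action(policy_action, noise_std, low, high, rng=random):
    """Gaussian exploration around a deterministic policy output,
    clipped to the feasible action (bid) interval [low, high]."""
    noisy = policy_action + rng.gauss(0.0, noise_std)
    return min(max(noisy, low), high)

rng = random.Random(42)
# Hypothetical deterministic bid of 300 CNY/MWh, explored within [0, 500]
samples = [explore_action(300.0, noise_std=30.0, low=0.0, high=500.0, rng=rng)
           for _ in range(1000)]
print(min(samples) >= 0.0 and max(samples) <= 500.0)  # True: always feasible
```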
An efficient vertical federated learning model enhanced by adaptive gradient sparsification
18
Authors: 刘冬兰, 赵夫慧, 王睿, 张昊, 刘新, 常英贤. 《济南大学学报(自然科学版)》 (北大核心), 2026, No. 2, pp. 289-296 (8 pages)
To reduce the heavy communication overhead of transmitting model parameters during the training of vertical federated learning models, an efficient vertical federated learning model enhanced by adaptive gradient sparsification is proposed. Before each participant uploads its encrypted gradients, the gradients are sparsified, optimizing parameter-transmission efficiency. An end-to-end dynamic adaptive mapping algorithm for the quantitative sparsification threshold is established, enabling the hyperparameter threshold to be solved dynamically and adaptively. Based on each participant's private data, an input feature set for the mapping model is constructed, making the gradient-threshold solution data-driven and improving its accuracy. Simulation results show that, compared with the baseline models, the proposed model speeds up training by 24.4% on average and improves the accuracy of power-data anomaly detection by 9% on average, effectively improving the modeling efficiency of vertical federated learning while maintaining detection accuracy.
Keywords: Vertical federated learning; Efficient gradient transmission algorithm; Gradient sparsification; Neural network
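Gradient sparsification as described above typically keeps only the largest-magnitude entries before upload; a minimal top-k sketch, where the fixed keep ratio is a stand-in for the paper's adaptive, data-driven threshold:

```python
import numpy as np

def sparsify_topk(grad, keep_ratio=0.25):
    """Zero out all but the largest-magnitude gradient entries, reducing
    the volume of (later encrypted) parameters each participant uploads."""
    grad = np.asarray(grad, dtype=float)
    k = max(1, int(round(keep_ratio * grad.size)))
    idx = np.argsort(np.abs(grad))[-k:]      # indices of the k largest |g|
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

g = np.array([0.02, -1.5, 0.3, 0.01, 0.9, -0.05, 0.0, 2.2])
print(sparsify_topk(g))  # only the two largest-magnitude entries survive
```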
Research on dynamic scheduling for lot-streaming hybrid flow shops with parallel machines of different speeds
19
Authors: 昝云磊, 刘贵杰, 王川, 张玮, 刘新宇, 钟正彬, 张金营. 《机电工程》 (北大核心), 2026, No. 1, pp. 102-116 (15 pages)
To address the delayed scheduling responses and multi-objective co-optimization difficulties caused by coupled dynamic events in the manufacturing of platen panels for power-station boilers, a dynamic scheduling method based on deep reinforcement learning is proposed. First, a lot-streaming hybrid flow-shop scheduling model with parallel machines of different speeds (LSHFSP-Qm) is constructed to precisely describe production constraints such as heterogeneous machine speeds, lot transfer, and energy consumption. Then, within the twin delayed deep deterministic policy gradient (TD3) framework, the policy network is restructured with a long short-term memory (LSTM) network to strengthen temporal feature extraction, and a multi-level reward mechanism integrating penalties for time differences, energy consumption, and order delays is designed, yielding a flexible, adaptive, dynamic-event-driven multi-objective rescheduling mechanism. Finally, the method is validated on multiple benchmark instances and in workshop experiments. The results show that the improved TD3 algorithm provides better near-optimal solutions than conventional deep reinforcement learning methods. In a platen-panel workshop, scheduling efficiency improved by 309.09%, dynamic-event response speed improved by 300%, overall production efficiency indirectly improved by 14.29%, order tardiness was shortened by 66.7%, and the average energy consumption of production-line equipment fell by 5%. The method effectively coordinates conflicting objectives and significantly enhances the algorithm's adaptability in complex dynamic environments, providing a feasible solution for the intelligent transformation of workshop scheduling in the equipment-manufacturing industry.
Keywords: Lot-streaming hybrid flow-shop scheduling with parallel machines of different speeds; Flexible manufacturing systems and cells; Twin delayed deep deterministic policy gradient algorithm; Deep reinforcement learning; Dynamic scheduling; Multi-objective optimization
Diagnostic performance of a deep learning model fusing multimodal ultrasound and MRI features for benign and malignant breast nodules
20
Authors: 廖慧娴, 王静, 林美花, 农雨霖, 马云川, 廖丽萍, 陈彩. 《北京生物医学工程》, 2026, No. 1, pp. 24-30 (7 pages)
Objective: To explore the value of an XGBoost model built from multimodal ultrasound and magnetic resonance imaging (MRI) feature parameters in the early diagnosis of benign and malignant breast nodules. Methods: The clinical and preoperative imaging data of 237 patients with breast nodules confirmed by surgical pathology were retrospectively analyzed and divided into a benign-nodule group (n=136) and a malignant-nodule group (n=101) according to the pathological results. Receiver operating characteristic (ROC) curves were plotted and the area under the curve (AUC) was used to evaluate the diagnostic value of multimodal ultrasound combined with MRI for breast nodules; the least absolute shrinkage and selection operator (LASSO) with 10-fold cross-validation was used to screen the best variables associated with malignant breast nodules, which were further screened by multivariate regression analysis; the eXtreme gradient boosting (XGBoost) algorithm was used to rank the importance of the screened variables, and the model was analyzed visually with the SHAP method. Results: The two groups differed in conventional ultrasound, color Doppler features, and qualitative MRI parameters, including blood-flow signal, microcalcification, boundary, morphology, internal echo, tumor shape, margin, internal enhancement, and time-signal intensity curve (TIC) type (P<0.05). In the malignant group, the shear wave elastography (SWE) parameters (the maximum and the local-region mean, minimum, and maximum elastic moduli and the modulus standard deviation) and the MRI feature parameters (volume transfer constant (Ktrans), rate constant (Kep), and extravascular extracellular volume fraction (Ve)) were higher than in the benign group, while the apparent diffusion coefficient (ADC) was lower (P<0.05). LASSO and multivariate regression identified age, maximum diameter, ADC, Ktrans, and Emax as the best variables for predicting benign versus malignant breast nodules; SHAP analysis of the decision weights of the XGBoost model's variables showed that Emax, Ktrans, and ADC carried high decision weights for predicting malignant breast nodules. Conclusion: This study preliminarily demonstrates the feasibility and effectiveness of an XGBoost model built from multimodal ultrasound (Emax) and MRI features (Ktrans and ADC) in the differential diagnosis of benign and malignant breast nodules.
Keywords: Breast cancer; Shear wave elastography; Extreme gradient boosting algorithm; Magnetic resonance imaging; Deep learning model