Journal Articles
3,280 articles found
1. Rapid pathologic grading-based diagnosis of esophageal squamous cell carcinoma via Raman spectroscopy and a deep learning algorithm (Cited by: 1)
Authors: Xin-Ying Yu, Jian Chen, Lian-Yu Li, Feng-En Chen, Qiang He. World Journal of Gastroenterology, 2025, Issue 14, pp. 32-46 (15 pages)
BACKGROUND: Esophageal squamous cell carcinoma is a major histological subtype of esophageal cancer, and many molecular genetic changes are associated with its occurrence. Raman spectroscopy has become a new method for the early diagnosis of tumors because it can reflect the structures of substances and their changes at the molecular level. AIM: To detect alterations in Raman spectral information across different stages of esophageal neoplasia. METHODS: Esophageal lesions of different grades were collected, yielding a total of 360 groups of Raman spectrum data. A 1D-transformer network model was proposed for the task of classifying the spectral data of esophageal squamous cell carcinoma. In addition, a deep learning model was applied to visualize the Raman spectral data and interpret their molecular characteristics. RESULTS: A comparison among Raman spectral data with different pathological grades and a visual analysis revealed that the Raman peaks with significant differences were concentrated mainly at 1095 cm^(-1) (DNA, symmetric PO2- stretching vibration), 1132 cm^(-1) (cytochrome c), 1171 cm^(-1) (acetoacetate), 1216 cm^(-1) (amide III), and 1315 cm^(-1) (glycerol). A comparison among the training results of different models revealed that the 1D-transformer network performed best, achieving 93.30% accuracy, 96.65% specificity, 93.30% sensitivity, and a 93.17% F1 score. CONCLUSION: Raman spectroscopy revealed significantly different waveforms for the different stages of esophageal neoplasia, and combining Raman spectroscopy with deep learning methods could significantly improve classification accuracy.
Keywords: Raman spectroscopy; esophageal neoplasia; early diagnosis; deep learning algorithm; rapid pathologic grading
2. Hybrid Beamforming for MU-MISO Communication via Deep Unfolding
Authors: Liu Dangpeng, He Xin, He Haoming. China Communications, 2026, Issue 2, pp. 260-267 (8 pages)
In hybrid beamforming design using the conventional gradient projection (GP) algorithm, it is common to use a fixed step size, which results in a slow convergence rate and unsatisfactory achievable-rate performance. This paper employs a deep unfolding algorithm within a small, fixed number of iterations to tackle the hybrid beamforming optimization problem. The optimal step size is obtained by combining the conventional GP algorithm with deep learning, and every step in the unfolded network is explainable. Simulation results show that the proposed deep unfolding algorithm achieves lower computational time and superior achievable-rate performance compared with the conventional GP algorithm.
Keywords: deep unfolding algorithm; hybrid beamforming; unit modulus constraint
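For readers unfamiliar with the baseline method this paper unfolds, the iteration below is a generic gradient-projection sketch on a toy phase-only (unit-modulus) beamforming objective. It is not the authors' MU-MISO formulation; the fixed `steps` list is a placeholder for the per-iteration step sizes that a deep unfolding network would learn.

```python
import cmath

# Toy gradient-projection (GP) iteration for phase-only beamforming:
# maximize |a^H w|^2 subject to |w_i| = 1 (the unit modulus constraint).
# Deep unfolding would learn the per-iteration step sizes in `steps`;
# here they are fixed placeholders.
def gp_unit_modulus(a, steps):
    n = len(a)
    w = [1 + 0j] * n                              # feasible start: all phases zero
    for mu in steps:
        inner = sum(ai.conjugate() * wi for ai, wi in zip(a, w))
        grad = [ai * inner for ai in a]           # gradient of |a^H w|^2 w.r.t. conj(w)
        w = [wi + mu * gi for wi, gi in zip(w, grad)]
        w = [wi / abs(wi) if abs(wi) > 0 else 1 + 0j for wi in w]  # project onto |w_i| = 1
    return w

# Steering vector with arbitrary phases; the optimum aligns w with a, giving |a^H w| = n.
a = [cmath.exp(1j * p) for p in (0.3, -1.2, 2.0, 0.7)]
w = gp_unit_modulus(a, steps=[0.5] * 50)
print(abs(sum(ai.conjugate() * wi for ai, wi in zip(a, w))))  # should approach 4.0 (= n)
```

With a well-chosen (learned) step schedule, the same iteration reaches the aligned solution in far fewer unfolded layers, which is the paper's motivation for replacing the fixed step size.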
3. Deep neural network algorithm for estimating maize biomass based on simulated Sentinel 2A vegetation indices and leaf area index (Cited by: 15)
Authors: Xiuliang Jin, Zhenhai Li, Haikuan Feng, Zhibin Ren, Shaokun Li. The Crop Journal (SCIE, CAS, CSCD), 2020, Issue 1, pp. 87-97 (11 pages)
Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield; biomass is also a key trait for increasing grain yield through crop breeding. The aims of this study were (i) to identify the best vegetation indices for estimating maize biomass, (ii) to investigate the relationship between biomass and leaf area index (LAI) at several growth stages, and (iii) to evaluate a biomass model using measured vegetation indices or simulated Sentinel 2A vegetation indices and LAI with a deep neural network (DNN) algorithm. The results showed that biomass was associated with all vegetation indices. The three-band water index (TBWI) was the best vegetation index for estimating biomass, with R2, RMSE, and RRMSE of 0.76, 2.84 t ha−1, and 38.22%, respectively. LAI was highly correlated with biomass (R2 = 0.89, RMSE = 2.27 t ha−1, RRMSE = 30.55%). Biomass estimated from 15 hyperspectral vegetation indices using the DNN algorithm agreed well with measured biomass (R2 = 0.83, RMSE = 1.96 t ha−1, RRMSE = 26.43%). Estimation accuracy increased further when LAI was combined with the 15 vegetation indices (R2 = 0.91, RMSE = 1.49 t ha−1, RRMSE = 20.05%). Relationships between the hyperspectral vegetation indices and biomass differed from those between simulated Sentinel 2A vegetation indices and biomass, and biomass estimation from the hyperspectral vegetation indices was more accurate than that from the simulated Sentinel 2A indices (R2 = 0.87, RMSE = 1.84 t ha−1, RRMSE = 24.76%). The DNN algorithm was effective in improving biomass estimation accuracy and provides a guideline for estimating maize biomass in this region using remote sensing technology.
Keywords: biomass estimation; maize; vegetation indices; deep neural network algorithm; LAI
4. Deep Learning and Holt-Trend Algorithms for Predicting Covid-19 Pandemic (Cited by: 3)
Authors: Theyazn H. H. Aldhyani, Melfi Alrasheed, Mosleh Hmoud Al-Adaileh, Ahmed Abdullah Alqarni, Mohammed Y. Alzahrani, Ahmed H. Alahmadi. Computers, Materials & Continua (SCIE, EI), 2021, Issue 5, pp. 2141-2160 (20 pages)
The Covid-19 epidemic poses a serious public health threat to the world, where people with little or no pre-existing immunity can be more vulnerable to its effects. Developing surveillance systems for predicting the Covid-19 pandemic at an early stage could therefore save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the spread of the coronavirus. The long short-term memory (LSTM) and Holt-trend algorithms were applied to predict confirmed case numbers and deaths, using real-time data collected from the World Health Organization (WHO). Three countries were considered to test the proposed models: Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models perform better in predicting coronavirus cases. Standard performance measures, namely mean squared error (MSE), root mean squared error (RMSE), mean error, and correlation, were employed to evaluate the models. The correlation results of the LSTM in predicting the number of confirmed cases in the three countries are 99.94%, 99.94%, and 99.91%. For predicting Covid-19 deaths, the LSTM achieves 99.86%, 98.876%, and 99.16% for Saudi Arabia, Italy, and Spain, respectively. Similarly, the Holt-trend model's correlation results are 99.06%, 99.96%, and 99.94% for confirmed cases and 99.80%, 99.96%, and 99.94% for deaths for Saudi Arabia, Italy, and Spain, respectively. The empirical results indicate the efficient performance of the presented models in predicting the numbers of confirmed cases and deaths in these countries, providing better insight into the future of the pandemic. The results were obtained by applying time-series models, which merit consideration for the sake of saving many lives.
Keywords: deep learning algorithm; Holt-trend; prediction; Covid-19; machine learning
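The Holt-trend half of the study is simple enough to sketch directly. Below is a minimal Holt linear-trend forecaster in plain Python; this is a generic textbook version, not the authors' implementation, and the smoothing parameters are arbitrary.

```python
# Minimal Holt linear-trend forecaster (illustrative sketch, not the paper's code).
# Maintains a smoothed level and trend, then extrapolates `horizon` steps ahead.
def holt_forecast(series, alpha=0.8, beta=0.2, horizon=3):
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)   # smooth the level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smooth the trend
    return [level + (h + 1) * trend for h in range(horizon)]

# A perfectly linear case series should be extrapolated almost exactly.
cases = [10 + 5 * t for t in range(8)]            # 10, 15, ..., 45
print(holt_forecast(cases, horizon=2))            # ≈ [50.0, 55.0]
```

On real epidemic curves the method tracks the local linear trend, which is why it serves as a sensible classical baseline against the LSTM.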
5. Power System Resiliency and Wide Area Control Employing Deep Learning Algorithm (Cited by: 1)
Authors: Pandia Rajan Jeyaraj, Aravind Chellachi Kathiresan, Siva Prakash Asokan, Edward Rajan Samuel Nadar, Hegazy Rezk, Thanikanti Sudhakar Babu. Computers, Materials & Continua (SCIE, EI), 2021, Issue 7, pp. 553-567 (15 pages)
The power transfer capability of smart grid-connected transmission networks is reduced by inter-area oscillations, because inter-area oscillation modes delay power transfer and destabilize transmission networks. This effect is more noticeable in smart grid-connected systems, whose infrastructure includes a larger share of renewable energy resources. To overcome this problem, a deep learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area oscillation modes. The proposed Deep Wide Area Controller (DWAC) uses a Deep Belief Network (DBN) whose weights are updated from real-time phasor measurement unit data. A resilience assessment based on failure probability, financial impact, and time-series data in grid failure management determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962 and the inter-area damping ξ was reduced to 0.005. The results validate the efficiency of the proposed deep learning algorithm in damping inter-area and local oscillations under a two-channel attack as well, and offer robust management of power system resilience and timely control of operating conditions.
Keywords: neural network; deep learning algorithm; low-frequency oscillation; resiliency assessment; smart grid; wide-area control
6. Identification of High-Risk Scenarios for Cascading Failures in New Energy Power Grids Based on Deep Embedding Clustering Algorithms (Cited by: 1)
Authors: Xueting Cheng, Ziqi Zhang, Yueshuang Bao, Huiping Zheng. Energy Engineering (EI), 2023, Issue 11, pp. 2517-2529 (13 pages)
At present, the proportion of new energy in the power grid is increasing, and random fluctuations in power output increase the risk of cascading failures. This paper proposes a method for identifying high-risk cascading-failure scenarios in new energy power grids based on a deep embedding clustering (DEC) algorithm and applies it to risk assessment of cascading failures under different operating scenarios. First, considering the real-time operating status and system structure of new energy power grids, a scenario cascading-failure risk indicator is established. Based on this indicator, the cascading-failure risk is calculated for the scenario set, the scenarios are clustered with the DEC algorithm, and the scenarios with the highest indicators are selected as the significant-risk scenario set. Simulations on an example power grid show that the method can effectively identify scenarios with a high risk of cascading failures from a large number of scenarios.
Keywords: new energy power system; deep embedding clustering algorithms; cascading failures
7. Weighted adaptive filtering algorithm for carrier tracking of deep space signal (Cited by: 8)
Authors: Song Qingping, Liu Rongke. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2015, Issue 4, pp. 1236-1244 (9 pages)
Carrier tracking receives great emphasis and is a key difficulty of signal processing in deep space communication systems. For an autonomous radio receiving system in deep space, tracking of the received signal must be automatic when the signal-to-noise ratio (SNR) is unknown. If a frequency-locked loop (FLL) or phase-locked loop (PLL) with fixed loop bandwidth, or a Kalman filter with fixed noise variance, is adopted, estimation errors may accumulate and the filter may diverge. Therefore, a Kalman filter with adaptive capability is adopted to suppress filter divergence. After analyzing the inadequacies of the Sage-Husa adaptive filtering algorithm, this paper introduces a weighted adaptive filtering algorithm for autonomous radio. The introduced algorithm resolves the defect of the Sage-Husa algorithm that the noise covariance matrix can become negative definite during filtering. In addition, upper diagonal (UD) factorization and innovation-based adaptive control are used to reduce model estimation errors, suppress filter divergence, and improve filtering accuracy. Simulation results indicate that, compared with the Sage-Husa adaptive filtering algorithm, this algorithm adapts better to the loop and offers better convergence and tracking accuracy, contributing to effective and accurate carrier tracking in low-SNR environments and showing good application prospects.
Keywords: adaptive algorithms; carrier tracking; deep space communication; Kalman filters; tracking accuracy; weighted
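The innovation-based noise adaptation the abstract criticizes and builds on can be sketched in one dimension. This is a generic Sage-Husa-style filter with a crude positivity guard on the adapted measurement noise; the guard only illustrates the negative-definiteness problem the paper addresses, and the paper's weighted, UD-factorized algorithm is not reproduced here.

```python
import random

# 1-D Kalman filter with a Sage-Husa-style, innovation-driven update of the
# measurement-noise estimate r (illustrative sketch of the adaptive idea only).
def adaptive_kalman(measurements, q=1e-4, r0=1.0, forgetting=0.98):
    x, p, r = measurements[0], 1.0, r0
    estimates = []
    for k, z in enumerate(measurements[1:], start=1):
        p = p + q                              # predict (random-walk state model)
        innov = z - x                          # innovation
        d = (1 - forgetting) / (1 - forgetting ** (k + 1))
        # Sage-Husa-style noise adaptation; the max() guard keeps r positive,
        # the defect the paper's weighted algorithm fixes more carefully.
        r = max((1 - d) * r + d * (innov * innov - p), 1e-8)
        kgain = p / (p + r)
        x = x + kgain * innov                  # update state
        p = (1 - kgain) * p
        estimates.append(x)
    return estimates

random.seed(0)
truth = 5.0
zs = [truth + random.gauss(0, 0.5) for _ in range(300)]
est = adaptive_kalman(zs)
print(est[-1])  # should settle near the true value 5.0
```

Note how the filter never needs the true measurement-noise variance: `r` is learned from the innovation sequence, which is the point of adaptive filtering when the SNR is unknown.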
8. Extended Deep Learning Algorithm for Improved Brain Tumor Diagnosis System
Authors: M. Adimoolam, K. Maithili, N. M. Balamurugan, R. Rajkumar, S. Leelavathy, Raju Kannadasan, Mohd Anul Haq, Ilyas Khan, ElSayed M. Tag El Din, Arfat Ahmad Khan. Intelligent Automation & Soft Computing, 2024, Issue 1, pp. 33-55 (23 pages)
At present, brain tumor prediction is performed using machine learning (ML) and deep learning (DL) algorithms. Although various ML and DL algorithms predict brain tumors to some degree, several measures still need enhancement, particularly accuracy, sensitivity, and false positive and false negative rates, to improve the brain tumor prediction system symmetrically. This work therefore proposed an Extended Deep Learning Algorithm (EDLA) and measured these performance parameters. The iterated measures were analyzed by comparing the EDLA method with a convolutional neural network (CNN) using the SPSS tool, with corresponding graphical illustrations. Over ten iterations, the mean performance measures for the proposed EDLA were 97.665% accuracy, 97.939% sensitivity, a 3.012% false positive rate, and a 3.182% false negative rate, whereas the CNN achieved mean accuracy of 94.287%, mean sensitivity of 95.612%, a mean false positive rate of 5.328%, and a mean false negative rate of 4.756%. These results show that the proposed EDLA method outperformed existing algorithms, including the CNN, with symmetrically improved parameters. The EDLA is novel in its performance and its particular activation function, and can be used effectively for precise and accurate brain tumor detection; after modification it could be applied to various other medical diagnoses. If the quantity of dataset records is enormous, the method's computational capacity must be upgraded.
Keywords: brain tumor; extended deep learning algorithm; convolutional neural network; tumor detection; deep learning
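The four performance parameters reported above all follow directly from a confusion matrix. A generic sketch of the definitions (the counts below are made-up illustration values, not the paper's data):

```python
# Confusion-matrix metrics of the kind reported for EDLA vs. CNN in the abstract.
def metrics(tp, tn, fp, fn):
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),                 # true positive rate (recall)
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# Hypothetical counts for illustration only.
m = metrics(tp=95, tn=90, fp=5, fn=10)
print(m["accuracy"], m["sensitivity"])  # 0.925 0.9047619047619048
```

Sensitivity and false negative rate always sum to 1 by construction, which is why papers usually report improvements in both together.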
9. Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (Cited by: 2)
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. Journal of Electronic Science and Technology (CAS, CSCD), 2018, Issue 1, pp. 11-15 (5 pages)
The performance of deep learning (DL) networks has been increased by elaborating network structures. However, DL networks have many parameters that strongly influence network performance. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. The method optimizes DBNN parameters such as the number of hidden units, the number of epochs, and the learning rates, reducing the error rate and the network training time for object recognition. After recognizing objects, the robot performs pick-and-place operations. We built a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
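The GA-over-hyperparameters idea can be sketched without a real DBNN. Below, a synthetic fitness function stands in for "validation accuracy of a trained DBNN", and the search-space bounds and the pretend optimum are made-up assumptions, not values from the paper.

```python
import random

# Tiny genetic algorithm over (hidden_units, epochs, learning_rate), in the
# spirit of the GA-DBNN tuning described above. The fitness is a synthetic
# stand-in for training-and-validating a DBNN, which we cannot run here.
random.seed(42)
SPACE = {"hidden": (16, 256), "epochs": (5, 100), "lr": (0.001, 0.5)}

def fitness(ind):
    h, e, lr = ind  # pretend the best settings are h=128, e=50, lr=0.1
    return -((h - 128) / 128) ** 2 - ((e - 50) / 50) ** 2 - ((lr - 0.1) / 0.1) ** 2

def random_ind():
    return (random.randint(*SPACE["hidden"]), random.randint(*SPACE["epochs"]),
            random.uniform(*SPACE["lr"]))

def mutate(ind):
    h, e, lr = ind
    return (min(max(h + random.randint(-16, 16), 16), 256),
            min(max(e + random.randint(-5, 5), 5), 100),
            min(max(lr * random.uniform(0.5, 1.5), 0.001), 0.5))

pop = [random_ind() for _ in range(20)]
for _ in range(40):                     # evolve: keep the best half, mutate it
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = max(pop, key=fitness)
print(best)  # should land near (128, 50, 0.1)
```

In the real setting each fitness evaluation is a full DBNN training run, which is why the paper emphasizes the reduction in training time as much as the error rate.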
10. Genetic algorithm in seismic waveform inversion and its application in deep seismic sounding data interpretation (Cited by: 1)
Authors: 王夫运, 张先康. Acta Seismologica Sinica (English Edition) (EI, CSCD), 2006, Issue 2, pp. 163-172 (10 pages)
A genetic algorithm for body waveform inversion is presented for better understanding of crustal and upper mantle structures from deep seismic sounding (DSS) waveform data. A generalized reflection and transmission synthetic seismogram algorithm, capable of calculating the response of thin alternating high- and low-velocity layers, is applied for forward modeling, and the genetic algorithm is used to find the optimal solution of the inverse problem. Numerical tests suggest that the method can resolve low-velocity layers and thin alternating high- and low-velocity layers, and that it suppresses noise. Waveform inversion of P-wave records from the Zeku, Xiahe, and Lintao shots in a seismic wide-angle reflection/refraction survey along the northeastern Qinghai-Xizang (Tibetan) Plateau has revealed fine structures at the bottom of the upper crust and alternating layers in the middle/lower crust and topmost upper mantle.
Keywords: genetic algorithm; waveform inversion; numerical test; deep seismic sounding; fine crustal structure
11. Metro Track Area Recognition Based on an Improved DeeplabV3+ Algorithm
Authors: 刘嘉宁, 赵才友, 张银喜. 《铁道建筑》 (北大核心), 2025, Issue 2, pp. 139-145 (7 pages)
To address the imprecise target segmentation, large computation and storage demands, and slow detection speed of existing deep-learning algorithms for metro track area recognition, a recognition algorithm based on an improved DeeplabV3+ is proposed. The model replaces the backbone with MobileNetV2, a lightweight convolutional neural network with lower model size and computational complexity; introduces the CBAM (Convolutional Block Attention Module) attention mechanism to improve the network's perception of features; and improves the ASPP (Atrous Spatial Pyramid Pooling) module so that it can encode multi-scale information. A self-built dataset was used to verify the method, which was compared with the classic DeeplabV3+, U-net, and Mask R-CNN algorithms. The results show that the proposed algorithm achieves precision, accuracy, recall, and mean intersection-over-union of 94.57%, 94.43%, 93.49%, and 90.24%, respectively, with a training time of 6.5 h, a per-image prediction time of 51.78 ms, and a model size of 23 MB, outperforming the other three algorithms. The algorithm improves track-area segmentation performance while increasing training and detection efficiency, making it feasible and practical for metro track area recognition.
Keywords: metro; track area recognition; deep learning; semantic segmentation; DeeplabV3+ algorithm
12. Directional Routing Algorithm for Deep Space Optical Network
Authors: Lei Guo, Xiaorui Wang, Yejun Liu, Pengchao Han, Yamin Xie, Yuchen Tan. China Communications (SCIE, CSCD), 2017, Issue 1, pp. 158-168 (11 pages)
With the development of science, the economy, and society, research and exploration of deep space have entered a stage of rapid, stable development. The Deep Space Optical Network (DSON) is expected to become an important foundation and an inevitable development trend of future deep space communication. In this paper, we design a deep space node model capable of combining space division multiplexing with frequency division multiplexing. Furthermore, we propose a directional flooding routing algorithm (DFRA) for the DSON based on our node model. This scheme selectively forwards data packets during routing, so that energy consumption is reduced effectively because only a portion of the nodes participate in the flooding. Simulation results show that, compared with a traditional flooding routing algorithm (TFRA), the DFRA avoids non-directional, blind transmission; energy consumption in message routing is therefore reduced and the lifespan of the DSON is prolonged effectively. Although routing implementation is slightly more complex than in the TFRA, node energy is saved and the transmission rate is clearly improved, so the overall performance of the DSON is significantly better.
Keywords: deep space optical network; routing algorithm; directional flooding routing algorithm; traditional flooding routing algorithm
13. A Novel Optimization Algorithm for Calibrating Pollutant Degradation Coefficient in Deep Tunnel Based on Storm Water Management Model
Authors: Kaiyuan Zheng, Ying Zhang. Journal of Geoscience and Environment Protection, 2024, Issue 12, pp. 207-217 (11 pages)
Aiming at a more accurate pollutant degradation coefficient for the deep tunnel system, this work puts forward a novel optimized algorithm to calibrate the coefficient and compares it with an ordinary fitting method. The algorithm incorporates an outlier filtration mechanism and a gradient descent mechanism to improve performance, and the calibration result is substituted into the Storm Water Management Model (SWMM) source code to validate its effectiveness against observed data. COD, NH3-N, TN, and TP are chosen as pollutant indicators of the observed data, and RMSE, MSE, and ME are selected as efficiency indicators. The results show that the outlier filtration mechanism performs better than the fitting method, while the gradient descent mechanism reduces the number of iterations by about 92.42% and improves computational efficiency roughly 55-fold over the ordinary iterative method; the algorithm is expected to perform even better with substantial observed data.
Keywords: SWMM; pollutant degradation coefficient; deep tunnel system; optimized algorithm
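For a first-order decay model of the form C(t) = C0·exp(-k·t), the gradient-descent half of such a calibration can be sketched as follows. This is a generic illustration on synthetic data, not the paper's SWMM-coupled algorithm, and it omits the outlier filtration step.

```python
import math

# Calibrate a first-order pollutant decay coefficient k in C(t) = C0*exp(-k*t)
# by gradient descent on the mean squared error against observations.
def calibrate_k(observations, c0, k=0.01, lr=1e-6, iters=5000):
    for _ in range(iters):
        grad = 0.0
        for t, c_obs in observations:
            c_sim = c0 * math.exp(-k * t)
            grad += 2 * (c_sim - c_obs) * (-t * c_sim)   # d(error^2)/dk
        k -= lr * grad / len(observations)
    return k

# Synthetic observations generated with k = 0.15 should be recovered.
true_k, c0 = 0.15, 80.0
obs = [(t, c0 * math.exp(-true_k * t)) for t in range(0, 24, 3)]
print(calibrate_k(obs, c0))  # ≈ 0.15
```

With noisy field data the same loop works, but outliers pull the gradient hard, which is presumably why the paper pairs gradient descent with an outlier filtration mechanism.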
14. Intelligent Scrap Steel Grading Algorithm Based on an Improved DeepLabv3+ Convolutional Neural Network (Cited by: 1)
Authors: 吉孟扬, 施凯旋, 郭宇, 杨博晟. 《轧钢》 (北大核心), 2025, Issue 5, pp. 150-158 (9 pages)
Scrap steel grading is a key link in the rational recycling of steel. To address the insufficient detection accuracy and low efficiency of existing scrap grading methods, this paper proposes an intelligent scrap grading algorithm based on an improved DeepLabv3+ convolutional neural network. The algorithm adds a hybrid attention mechanism after the Atrous Spatial Pyramid Pooling (ASPP) layer and replaces some of the atrous convolutions in the ASPP layer with deep strip atrous convolutions. An image dataset of scrap piles covering real scenes with different material types, viewing angles, and time periods was constructed to train the intelligent grading model. The improved algorithm effectively raises detection accuracy: mean intersection-over-union (mIoU) improves by about 2.54% with a ResNet backbone and by about 4.42% with an Xception backbone, effectively improving scrap semantic segmentation. A conversion model based on thickness and distance converts each scrap class's pixel share in an image into its actual mass share, and a fully connected network fits the algorithm's output to workers' actual grading results. Experiments on a large amount of data show that the model's grading accuracy reaches 93.75%, clearly outperforming existing methods and meeting actual production needs.
Keywords: scrap steel; deep learning; semantic segmentation; DeepLabv3+ convolutional neural network; intelligent algorithm
15. Digitization of Mine Rock Mass Shadows Based on an Improved DeepLabv3+ Algorithm
Authors: 薛天山, 李永强, 郝跃, 贺东东, 武国鹏, 白怡明. 《非金属矿》, 2025, Issue 2, pp. 87-91 (5 pages)
To accurately obtain rock mass feature information, a digitization method for mine rock mass shadows based on an improved DeepLabv3+ algorithm is proposed. In the feature extraction network of DeepLabv3+, the original Xception backbone is replaced with the lightweight MobileNetV2 network to reduce parameter computation and increase speed, and channel weights are adjusted dynamically. The improved algorithm automatically identifies and digitizes structural-plane shadows of mine rock masses, including shadow skeleton extraction, intersection elimination, and length calculation. The results show that the improved DeepLabv3+ achieves a maximum pixel accuracy of 93.65%, a mean pixel accuracy of 87.65%, a mean class accuracy of 89.12%, and a mean intersection-over-union of 79.34%, all better than the compared algorithms. The digitized structural-plane shadows show small deviation and high reliability and reflect actual measurements well. The method improves the effectiveness and reliability of mine rock mass shadow digitization and provides a powerful new tool for mine rock mass processing.
Keywords: improved DeepLabv3+ algorithm; mine; rock engineering; image segmentation; structural-plane shadow
16. Medium- and Short-Term Wind Power Prediction Based on ICPO-Optimized VMD Coupled with Deep Learning Models
Authors: 黄伟, 刘彬, 李火坤, 黄俊, 黄梓阳. 《太阳能学报》 (北大核心), 2026, Issue 2, pp. 546-557 (12 pages)
To improve wind power prediction accuracy and enhance the generalization of hybrid models, a hybrid medium- and short-term wind power prediction model is proposed that couples variational mode decomposition (VMD) with a bidirectional temporal convolutional network (BiTCN), a bidirectional long short-term memory network (BiLSTM), and an attention mechanism, with an improved crested porcupine optimizer (ICPO) used to optimize both the VMD decomposition parameters and the hybrid model's parameters. First, ICPO searches for the optimal VMD core parameters (the mode number K and the penalty coefficient α), and the original wind power series is decomposed by VMD. ICPO is then used to automatically tune the hyperparameters of the BiTCN-BiLSTM-Attention deep learning model, and an ICPO-BiTCN-BiLSTM-Attention prediction model is built for each decomposed component. Finally, the component predictions are summed to obtain the final prediction. Validation on a real wind farm shows that, compared with single prediction models and conventional hybrid models, the proposed coupled model achieves significant improvements in both prediction accuracy and generalization.
Keywords: wind power; prediction; deep learning; adaptive algorithm; variational mode decomposition
17. Performance Comparison of Deep Learning Optimization Algorithms in Full Waveform Inversion
Authors: 武国宁, 吴春勇, 杨晓璇, 李金丘. 《中国石油大学学报(自然科学版)》 (北大核心), 2026, Issue 1, pp. 45-54 (10 pages)
Conventional optimization algorithms for full waveform inversion (FWI) often face local minima, high sensitivity to the initial model, and slow convergence, which limit inversion stability and imaging accuracy. To address these problems, this paper systematically analyzes and compares several adaptive optimization algorithms commonly used in deep learning, including stochastic gradient descent (SGD), Adam, Adagrad, RMSProp, and Nadam, and evaluates their convergence behavior and imaging performance in FWI through comparative experiments on 2D seismic data. The results show that adaptive moment estimation algorithms (such as RAdam and NAdam) have clear advantages in convergence speed, stability, and inversion image quality, while Adagrad performs relatively poorly in inversion accuracy and iteration stability. Properly choosing a deep learning optimization strategy is therefore important for improving the stability and imaging resolution of FWI, and provides new methodological support and a theoretical reference for full waveform inversion under complex subsurface conditions.
Keywords: full waveform inversion; deep learning; optimization algorithm; automatic differentiation; seismic exploration
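As a concrete reference for one of the compared optimizers, here is the standard Adam update written out in plain Python and applied to a toy quadratic misfit standing in for the FWI objective. This is purely illustrative; the test function and hyperparameters are made up.

```python
import math

# Standard Adam update (first and second moment estimates with bias correction),
# applied to minimize a function given its gradient.
def adam_minimize(grad_fn, x0, lr=0.02, beta1=0.9, beta2=0.999, eps=1e-8, steps=3000):
    x, m, v = list(x0), [0.0] * len(x0), [0.0] * len(x0)
    for t in range(1, steps + 1):
        g = grad_fn(x)
        for i in range(len(x)):
            m[i] = beta1 * m[i] + (1 - beta1) * g[i]          # first moment
            v[i] = beta2 * v[i] + (1 - beta2) * g[i] * g[i]   # second moment
            m_hat = m[i] / (1 - beta1 ** t)                   # bias correction
            v_hat = v[i] / (1 - beta2 ** t)
            x[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Toy "misfit" f(x) = (x0-3)^2 + 10*(x1+1)^2, minimum at (3, -1).
grad = lambda x: [2 * (x[0] - 3), 20 * (x[1] + 1)]
print(adam_minimize(grad, [0.0, 0.0]))  # ≈ [3.0, -1.0]
```

The per-coordinate scaling by the second-moment estimate is what helps on badly conditioned objectives, a situation that also arises in FWI misfit landscapes.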
18. Active Disturbance Rejection Control of a Bidirectional DC-DC Converter in a PV-Storage Microgrid Based on TD3 Reinforcement Learning
Authors: 马幼捷, 胡钰, 周雪松, 闫凤祥, 白鑫, 陶珑. 《太阳能学报》 (北大核心), 2026, Issue 1, pp. 202-213 (12 pages)
Considering that the uncertainty introduced by a high share of renewable energy makes large fluctuations of the microgrid DC bus voltage difficult to suppress, this paper proposes an active disturbance rejection control (ADRC) strategy for a bidirectional DC-DC converter based on the twin delayed deep deterministic policy gradient (TD3) reinforcement learning algorithm. First, a linear extended state observer reconstructs the system to estimate and compensate the total disturbance, and the tracking and disturbance rejection of the control strategy are analyzed in the frequency domain. Next, observer parameters are obtained through extensive simulated interaction and self-learning to intelligently adjust the weight-update scheme of the neural network, the form of the reward function is optimized, and the network schedules parameters online in real time, so that sufficient training approximates the optimal control law. Finally, a digital simulation platform and low-power experiments verify that, under multiple operating conditions, the proposed strategy achieves smaller voltage deviation, faster response, and superior dynamic and steady-state performance compared with dual-closed-loop PI control and conventional linear ADRC, effectively improving the disturbance rejection of the DC bus voltage.
Keywords: bidirectional DC-DC converter; PV-storage microgrid; active disturbance rejection control; TD3 deep reinforcement learning algorithm
19. Intelligent Friend-or-Foe Discrimination Algorithm for Simulation Based on YOLOv5
Authors: 杨红莉, 于浩, 季睿航, 李文卓, 叶锦泽, 梁远生. 《现代信息科技》, 2026, Issue 4, pp. 44-48 (5 pages)
To address the recognition difficulty caused by the similar appearance of friendly and enemy targets in military simulation environments, this paper proposes an intelligent discrimination algorithm based on YOLOv5. A multi-dimensional tactical simulation dataset is constructed with a three-class labeling scheme (red, blue, and black) marking friendly forces, enemy forces, and hostages, respectively, and multi-source data augmentation and sample balancing techniques improve data quality. At the algorithm level, an attention mechanism and a multi-level feature fusion strategy enhance the model's ability to distinguish similar targets; at the computation level, lightweight inference optimization balances detection speed and accuracy. Experimental results show that the model reaches a mean average precision (mAP@0.5) of 0.98 on the self-built dataset and can accurately identify and count friendly units, enemy units, and hostages, providing reliable technical support for intelligent decision-making in complex battlefield environments.
Keywords: YOLOv5 algorithm; deep learning; object detection; multi-feature fusion
20. A Slope Stability Prediction Method Combining Deep Kernel Learning and Gaussian Processes
Authors: 李书, 喻国荣, 付兵杰, 鲍海洲. 《水力发电》, 2026, Issue 2, pp. 40-47 (8 pages)
Given the complex nonlinear relationships among slope features and between those features and stability classification, classical Gaussian process methods for slope stability prediction are limited in modeling complex structures and struggle with large-scale slope data. A slope stability prediction method combining deep kernel learning and Gaussian processes is therefore proposed. A multilayer feedforward network first performs deep feature extraction on the slope features, and the latent space is then mapped into a Gaussian process with a radial basis function (RBF) kernel, enabling nonparametric uncertainty quantification. The model optimizes the neural network weights and kernel hyperparameters by maximizing the marginal log-likelihood, learning an optimal data-driven kernel end to end. Experiments on a public Kaggle dataset show that the proposed method achieves the best results in root mean square error, mean absolute error, and coefficient of determination compared with the classical machine learning algorithms random forest (RF), support vector machine (SVM), and Gaussian process regression (GPR), as well as the deep learning methods gated recurrent unit (GRU) and deep neural network (DNN), providing new technical support for intelligent early warning of slope hazards.
Keywords: slope stability; prediction algorithm; deep kernel learning; Gaussian process regression; classical machine learning algorithms