Journal Articles
3,081 articles found
Rapid pathologic grading-based diagnosis of esophageal squamous cell carcinoma via Raman spectroscopy and a deep learning algorithm
1
Authors: Xin-Ying Yu, Jian Chen, Lian-Yu Li, Feng-En Chen, Qiang He. 《World Journal of Gastroenterology》, 2025, Issue 14, pp. 32-46 (15 pages)
BACKGROUND: Esophageal squamous cell carcinoma is a major histological subtype of esophageal cancer. Many molecular genetic changes are associated with its occurrence. Raman spectroscopy has become a new method for the early diagnosis of tumors because it can reflect the structures of substances and their changes at the molecular level. AIM: To detect alterations in Raman spectral information across different stages of esophageal neoplasia. METHODS: Different grades of esophageal lesions were collected, and a total of 360 groups of Raman spectrum data were collected. A 1D-transformer network model was proposed to handle the task of classifying the spectral data of esophageal squamous cell carcinoma. In addition, a deep learning model was applied to visualize the Raman spectral data and interpret their molecular characteristics. RESULTS: A comparison among Raman spectral data with different pathological grades and a visual analysis revealed that the Raman peaks with significant differences were concentrated mainly at 1095 cm^(-1) (DNA, symmetric PO, and stretching vibration), 1132 cm^(-1) (cytochrome c), 1171 cm^(-1) (acetoacetate), 1216 cm^(-1) (amide III), and 1315 cm^(-1) (glycerol). A comparison among the training results of different models revealed that the 1D-transformer network performed best. A 93.30% accuracy value, a 96.65% specificity value, a 93.30% sensitivity value, and a 93.17% F1 score were achieved. CONCLUSION: Raman spectroscopy revealed significantly different waveforms for the different stages of esophageal neoplasia. The combination of Raman spectroscopy and deep learning methods could significantly improve the accuracy of classification.
Keywords: Raman spectroscopy; esophageal neoplasia; early diagnosis; deep learning algorithm; rapid pathologic grading
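The accuracy, sensitivity, specificity, and F1 figures reported in this abstract are standard multi-class metrics; a minimal macro-averaged sketch of how they are derived from a confusion matrix (the matrix below is illustrative, not the paper's data):

```python
import numpy as np

def macro_metrics(cm):
    """Accuracy and macro-averaged sensitivity, specificity, and F1 from a
    confusion matrix (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - tp - fp - fn
    sensitivity = tp / (tp + fn)          # per-class recall
    specificity = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)      # equivalent to 2PR/(P+R)
    accuracy = tp.sum() / cm.sum()
    return accuracy, sensitivity.mean(), specificity.mean(), f1.mean()

# Illustrative 3-grade confusion matrix (not the paper's data)
acc, sens, spec, f1 = macro_metrics([[90, 5, 5],
                                     [4, 92, 4],
                                     [3, 3, 94]])
```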
Extended Deep Learning Algorithm for Improved Brain Tumor Diagnosis System
2
Authors: M. Adimoolam, K. Maithili, N. M. Balamurugan, R. Rajkumar, S. Leelavathy, Raju Kannadasan, Mohd Anul Haq, Ilyas Khan, ElSayed M. Tag El Din, Arfat Ahmad Khan. 《Intelligent Automation & Soft Computing》, 2024, Issue 1, pp. 33-55 (23 pages)
At present, the prediction of brain tumors is performed using Machine Learning (ML) and Deep Learning (DL) algorithms. Although various ML and DL algorithms are adapted to predict brain tumors to some extent, some concerns still need enhancement, particularly accuracy, sensitivity, and false positive and false negative rates, to improve brain tumor prediction systems symmetrically. Therefore, this work proposed an Extended Deep Learning Algorithm (EDLA) to measure performance parameters such as accuracy, sensitivity, and false positive and false negative rates. In addition, these iterated measures were analyzed by comparing the EDLA method with the Convolutional Neural Network (CNN) approach, further using the SPSS tool, and respective graphical illustrations were shown. The mean performance measures for the proposed EDLA algorithm over ten iterations were accuracy (97.665%), sensitivity (97.939%), false positive rate (3.012%), and false negative rate (3.182%), whereas for the CNN the mean accuracy was 94.287%, mean sensitivity 95.612%, mean false positive rate 5.328%, and mean false negative rate 4.756%. These results show that the proposed EDLA method outperforms existing algorithms, including CNN, and ensures symmetrically improved parameters. The EDLA algorithm thus introduces novelty in its performance and its particular activation function. The proposed method can be utilized for precise and accurate brain tumor detection, and could be applied to various medical diagnoses after modification. If the quantity of dataset records is enormous, the method's computational power has to be updated.
Keywords: brain tumor; extended deep learning algorithm; convolutional neural network; tumor detection; deep learning
Metro track area recognition based on an improved DeeplabV3+ algorithm
3
Authors: 刘嘉宁, 赵才友, 张银喜. 《铁道建筑》 (Peking University Core), 2025, Issue 2, pp. 139-145 (7 pages)
To address the imprecise target segmentation, high computation and storage demands, and slow detection speed of existing deep-learning-based algorithms for metro track area recognition, a metro track area recognition algorithm based on an improved DeeplabV3+ is proposed. The model replaces the backbone with MobileNetV2, a lightweight convolutional neural network with small model size and low computational complexity; introduces the CBAM (Convolutional Block Attention Module) attention mechanism to improve the network's feature perception; and improves the ASPP (Atrous Spatial Pyramid Pooling) module so that it can encode multi-scale information. The effectiveness of the method was verified on a self-built dataset and compared with the classic DeeplabV3+, U-net, and Mask R-CNN algorithms. The results show that the proposed algorithm achieves precision, accuracy, recall, and mean intersection-over-union of 94.57%, 94.43%, 93.49%, and 90.24%, respectively, with a training time of 6.5 h, a per-image prediction time of 51.78 ms, and a model size of 23 MB, outperforming the other three algorithms. The proposed algorithm improves track area segmentation performance while increasing training and detection efficiency, and is feasible and practical for metro track area recognition.
Keywords: metro; track area recognition; deep learning; semantic segmentation; DeeplabV3+ algorithm
A Novel Optimization Algorithm for Calibrating Pollutant Degradation Coefficient in Deep Tunnel Based on Storm Water Management Model
4
Authors: Kaiyuan Zheng, Ying Zhang. 《Journal of Geoscience and Environment Protection》, 2024, Issue 12, pp. 207-217 (11 pages)
To work out a more accurate pollutant degradation coefficient for the deep tunnel system, this work puts forward a novel optimized algorithm to calibrate the coefficient and compares it with the ordinary fitting method. The algorithm incorporates an outlier filtration mechanism and a gradient descent mechanism to improve its performance, and the calibration result is substituted into the Storm Water Management Model (SWMM) source code to validate its effectiveness against simulated and observed data. COD, NH3-N, TN, and TP are chosen as pollutant indicators of the observed data, and RMSE, MSE, and ME are selected as efficiency indicators. The results show that the outlier filtration mechanism performs better than the fitting method, while the gradient descent mechanism reduces the number of iterations by about 92.42% and improves computational efficiency roughly 55-fold compared with the ordinary iterative method; the algorithm is expected to perform even better with substantial observed data.
Keywords: SWMM; pollutant degradation coefficient; deep tunnel system; optimized algorithm
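The gradient-descent calibration idea can be illustrated on a generic first-order decay model C(t) = C0·e^(−kt), minimizing the mean squared error against observations. This is a stand-in sketch only; the paper's SWMM-coupled procedure and outlier filtration are more involved, and the data below are synthetic:

```python
import math

def calibrate_decay_k(times, conc, c0, k0=0.1, lr=0.002, iters=2000):
    """Fit the first-order decay coefficient k in C(t) = c0*exp(-k*t)
    by gradient descent on the mean squared error."""
    k = k0
    n = len(times)
    for _ in range(iters):
        # dMSE/dk = (2/n) * sum((pred - obs) * dpred/dk), with dpred/dk = -t*pred
        grad = 0.0
        for t, c in zip(times, conc):
            pred = c0 * math.exp(-k * t)
            grad += 2.0 * (pred - c) * (-t * pred) / n
        k -= lr * grad
    return k

# Synthetic observations generated with k = 0.3 (illustrative, not SWMM data)
ts = [0.0, 1.0, 2.0, 4.0, 8.0]
obs = [10.0 * math.exp(-0.3 * t) for t in ts]
k_hat = calibrate_decay_k(ts, obs, c0=10.0)  # converges to ≈ 0.3
```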
Digitization of mine rock mass shadows based on an improved DeepLabv3+ algorithm
5
Authors: 薛天山, 李永强, 郝跃, 贺东东, 武国鹏, 白怡明. 《非金属矿》, 2025, Issue 2, pp. 87-91 (5 pages)
To accurately acquire rock mass feature information, a digitization method for mine rock mass shadows based on an improved DeepLabv3+ algorithm is proposed. In the feature extraction network of DeepLabv3+, the lightweight MobileNetV2 network replaces the original Xception to reduce parameter computation and increase speed, and channel weights are adjusted dynamically. The improved algorithm automatically recognizes and digitizes the shadows of rock mass discontinuities, including shadow skeleton extraction, intersection elimination, and length calculation. The results show that the improved DeepLabv3+ achieves a maximum pixel accuracy of 93.65%, a mean pixel accuracy of 87.65%, a mean class accuracy of 89.12%, and a mean intersection-over-union of 79.34%, outperforming the comparison algorithms. The digitization results of discontinuity shadows show small deviation and high reliability, reflecting actual measurements well. The method improves the effectiveness and reliability of mine rock mass shadow digitization and provides a powerful new approach for mine rock mass processing.
Keywords: improved DeepLabv3+ algorithm; mine; rock engineering; image segmentation; discontinuity shadow
An intelligent scrap steel grading algorithm based on an improved DeepLabv3+ convolutional neural network
6
Authors: 吉孟扬, 施凯旋, 郭宇, 杨博晟. 《轧钢》 (Peking University Core), 2025, Issue 5, pp. 150-158 (9 pages)
Scrap steel grading is a key step in the rational recycling of steel. To address the limited detection accuracy and low efficiency of existing scrap grading methods, this paper proposes an intelligent scrap grading algorithm based on an improved DeepLabv3+ convolutional neural network. The algorithm adds a hybrid attention mechanism after the Atrous Spatial Pyramid Pooling (ASPP) layer and replaces part of the atrous convolutions in the ASPP layer with deep strip atrous convolutions. An intelligent scrap grading model was trained on an image dataset of scrap piles covering different material types, viewing angles, and time periods in real scenarios. The improved algorithm effectively increases detection accuracy: the mean intersection-over-union (mIoU) improves by about 2.54% with ResNet as the backbone and by about 4.42% with Xception as the backbone, effectively improving scrap semantic segmentation accuracy. A conversion model based on thickness and distance converts the pixel proportion each scrap class occupies in an image into an actual mass proportion, and a fully connected network fits the algorithm's results to workers' actual results. Experiments on a large dataset show that the model achieves a grading accuracy of 93.75%, clearly outperforming existing methods and meeting actual production needs.
Keywords: scrap steel; deep learning; semantic segmentation; DeepLabv3+ convolutional neural network; intelligent algorithm
Construction of an early-warning mathematical model for acute hepatopancreatic necrosis disease (AHPND) in shrimp based on the Deep Forest algorithm (cited 1)
7
Authors: 王印庚, 于永翔, 蔡欣欣, 张正, 王春元, 廖梅杰, 朱洪洋, 李昊. 《渔业科学进展》 (CSCD, Peking University Core), 2024, Issue 3, pp. 171-181 (11 pages)
To forecast the occurrence of acute hepatopancreatic necrosis disease (AHPND) in pond-cultured Pacific white shrimp (Penaeus vannamei), the authors have carried out continuous monitoring of shrimp farming areas since 2020, covering 18 candidate early-warning factor indicators related to disease occurrence, including environmental physicochemical factors, microbial factors, and the health status of the shrimp themselves. After data standardization, correlations among pathogen, host, and environment were analyzed to screen the candidate warning factors. Data modeling and predictive performance evaluation were carried out in Python (simulation environment Python 2.7) using the Deep Forest, LightGBM, and XGBoost algorithms, with the warning factor indicators as input samples (warning signs) and the shrimp disease status as output (warning condition). Input and target data matrices were built from the input samples and output results, the input samples were initialized from the raw data matrix, and function equations were fitted; the fitted code can predict the target warning condition from known environmental, pathogen, and shrimp immune indicator data. A four-dimensional early-warning model based on the Deep Forest algorithm was finally established, using the total bacterial count in the shrimp (hepatopancreas), the proportion of Vibrio in the shrimp, the total bacterial count in the water, and salinity, reaching an accuracy of 89.00%. This study applies artificial intelligence algorithms to forecasting AHPND occurrence in shrimp; the results establish an early-warning mathematical model for AHPND and provide technical support for healthy shrimp farming and disease prevention and control.
Keywords: shrimp; acute hepatopancreatic necrosis disease; early-warning mathematical model; Deep Forest algorithm; Python language
Deep neural network algorithm for estimating maize biomass based on simulated Sentinel 2A vegetation indices and leaf area index (cited 13)
8
Authors: Xiuliang Jin, Zhenhai Li, Haikuan Feng, Zhibin Ren, Shaokun Li. 《The Crop Journal》 (SCIE, CAS, CSCD), 2020, Issue 1, pp. 87-97 (11 pages)
Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield. Biomass is also a key trait in increasing grain yield by crop breeding. The aims of this study were (i) to identify the best vegetation indices for estimating maize biomass, (ii) to investigate the relationship between biomass and leaf area index (LAI) at several growth stages, and (iii) to evaluate a biomass model using measured vegetation indices or simulated vegetation indices of Sentinel 2A and LAI using a deep neural network (DNN) algorithm. The results showed that biomass was associated with all vegetation indices. The three-band water index (TBWI) was the best vegetation index for estimating biomass, and the corresponding R2, RMSE, and RRMSE were 0.76, 2.84 t ha−1, and 38.22% respectively. LAI was highly correlated with biomass (R2 = 0.89, RMSE = 2.27 t ha−1, and RRMSE = 30.55%). Estimated biomass based on 15 hyperspectral vegetation indices was in high agreement with measured biomass using the DNN algorithm (R2 = 0.83, RMSE = 1.96 t ha−1, and RRMSE = 26.43%). Biomass estimation accuracy was further increased when LAI was combined with the 15 vegetation indices (R2 = 0.91, RMSE = 1.49 t ha−1, and RRMSE = 20.05%). Relationships between the hyperspectral vegetation indices and biomass differed from relationships between simulated Sentinel 2A vegetation indices and biomass. Biomass estimation from the hyperspectral vegetation indices was more accurate than that from the simulated Sentinel 2A vegetation indices (R2 = 0.87, RMSE = 1.84 t ha−1, and RRMSE = 24.76%). The DNN algorithm was effective in improving the estimation accuracy of biomass. It provides a guideline for estimating biomass of maize using remote sensing technology and the DNN algorithm in this region.
Keywords: biomass estimation; maize; vegetation indices; deep neural network algorithm; LAI
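The R2/RMSE/RRMSE triple used throughout this abstract is straightforward to compute; a minimal sketch (the biomass values below are illustrative, not the study's data):

```python
import math

def regression_metrics(obs, pred):
    """R^2, RMSE, and relative RMSE (RRMSE = RMSE / mean(obs) * 100%)."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    rrmse = rmse / mean_obs * 100.0
    return r2, rmse, rrmse

# Illustrative biomass values in t/ha (not the study's data)
obs = [4.0, 6.0, 8.0, 10.0, 12.0]
pred = [4.5, 5.5, 8.5, 9.5, 12.5]
r2, rmse, rrmse = regression_metrics(obs, pred)  # → 0.96875, 0.5, 6.25
```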
Deep Learning and Holt-Trend Algorithms for Predicting Covid-19 Pandemic (cited 3)
9
Authors: Theyazn H. H. Aldhyani, Melfi Alrasheed, Mosleh Hmoud Al-Adaileh, Ahmed Abdullah Alqarni, Mohammed Y. Alzahrani, Ahmed H. Alahmadi. 《Computers, Materials & Continua》 (SCIE, EI), 2021, Issue 5, pp. 2141-2160 (20 pages)
The Covid-19 epidemic poses a serious public health threat to the world, where people with little or no pre-existing immunity can be more vulnerable to its effects. Thus, developing surveillance systems for predicting the Covid-19 pandemic at an early stage could save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the coronavirus. The Long Short-Term Memory (LSTM) and Holt-trend algorithms were applied to predict confirmed numbers and death cases. The real-time data used were collected from the World Health Organization (WHO). Three countries were considered to test the proposed model, namely Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models show better performance in predicting the cases of coronavirus patients. Standard performance measures, mean squared error (MSE), root mean squared error (RMSE), mean error, and correlation, are employed to evaluate the proposed models. The empirical results of the LSTM, using the correlation metric, are 99.94%, 99.94%, and 99.91% in predicting the number of confirmed cases in the three countries. As for the LSTM model's results in predicting the number of deaths from Covid-19, they are 99.86%, 98.876%, and 99.16% for Saudi Arabia, Italy, and Spain respectively. Similarly, the Holt-trend model's results in predicting the number of confirmed cases of Covid-19, using the correlation metric, are 99.06%, 99.96%, and 99.94%, whereas its results in predicting the number of death cases are 99.80%, 99.96%, and 99.94% for Saudi Arabia, Italy, and Spain respectively. The empirical results indicate the efficient performance of the presented models in predicting the number of confirmed and death cases of Covid-19 in these countries. Such findings provide better insights regarding the future of the Covid-19 pandemic in general. The results were obtained by applying time series models, which need to be considered for the sake of saving lives.
Keywords: deep learning algorithm; Holt-trend; prediction; Covid-19; machine learning
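The Holt-trend component is double exponential smoothing with a trend term; a minimal sketch (the smoothing constants and series below are illustrative, not the paper's configuration):

```python
def holt_linear(series, alpha=0.8, beta=0.2, horizon=3):
    """Holt's linear trend (double exponential smoothing):
      level:    l_t = alpha*y_t + (1 - alpha)*(l_{t-1} + b_{t-1})
      trend:    b_t = beta*(l_t - l_{t-1}) + (1 - beta)*b_{t-1}
      forecast: y_{t+h} = l_t + h*b_t
    """
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# A perfectly linear series is extrapolated exactly (illustrative data)
forecasts = holt_linear([10, 20, 30, 40, 50], horizon=2)  # ≈ [60.0, 70.0]
```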
Power System Resiliency and Wide Area Control Employing Deep Learning Algorithm (cited 1)
10
Authors: Pandia Rajan Jeyaraj, Aravind Chellachi Kathiresan, Siva Prakash Asokan, Edward Rajan Samuel Nadar, Hegazy Rezk, Thanikanti Sudhakar Babu. 《Computers, Materials & Continua》 (SCIE, EI), 2021, Issue 7, pp. 553-567 (15 pages)
The power transfer capability of smart grid-connected transmission networks is reduced by inter-area oscillations, since inter-area oscillation modes delay and destabilize power transmission networks. This fact is more noticeable in smart grid-connected systems, whose infrastructure has more renewable energy resources installed for its operation. To overcome this problem, a deep learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area oscillation modes. The proposed Deep Wide-Area Controller (DWAC) uses the Deep Belief Network (DBN). The network weights are updated based on real-time data from phasor measurement units. Resilience assessment based on failure probability, financial impact, and time-series data in grid failure management determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962, and inter-area damping ξ was reduced to 0.005. The obtained results validate the proposed deep learning algorithm's efficiency in damping inter-area and local oscillations under a two-channel attack as well. The results also offer robust management of power system resilience and timely control of operating conditions.
Keywords: neural network; deep learning algorithm; low-frequency oscillation; resiliency assessment; smart grid; wide-area control
Identification of High-Risk Scenarios for Cascading Failures in New Energy Power Grids Based on Deep Embedding Clustering Algorithms (cited 1)
11
Authors: Xueting Cheng, Ziqi Zhang, Yueshuang Bao, Huiping Zheng. 《Energy Engineering》 (EI), 2023, Issue 11, pp. 2517-2529 (13 pages)
At present, the proportion of new energy in the power grid is increasing, and the random fluctuations in power output increase the risk of cascading failures in the power grid. In this paper, we propose a method for identifying high-risk scenarios of interlocking faults in new energy power grids based on a deep embedding clustering (DEC) algorithm and apply it in a risk assessment of cascading failures in different operating scenarios for new energy power grids. First, considering the real-time operation status and system structure of new energy power grids, the scenario cascading failure risk indicator is established. Based on this indicator, the risk of cascading failure is calculated for the scenario set, the scenarios are clustered based on the DEC algorithm, and the scenarios with the highest indicators are selected as the significant risk scenario set. The results of simulations with an example power grid show that our method can effectively identify scenarios with a high risk of cascading failures from a large number of scenarios.
Keywords: new energy power system; deep embedding clustering algorithms; cascading failures
Weighted adaptive filtering algorithm for carrier tracking of deep space signal (cited 8)
12
Authors: Song Qingping, Liu Rongke. 《Chinese Journal of Aeronautics》 (SCIE, EI, CAS, CSCD), 2015, Issue 4, pp. 1236-1244 (9 pages)
Carrier tracking is heavily emphasized and is a difficulty of signal processing in deep space communication systems. For an autonomous radio receiving system in deep space, tracking of the received signal is automatic when the signal-to-noise ratio (SNR) is unknown. If a frequency-locked loop (FLL) or phase-locked loop (PLL) with fixed loop bandwidth, or a Kalman filter with fixed noise variance, is adopted, accumulation of estimation error and filter divergence may result. Therefore, a Kalman filter algorithm with adaptive capability is adopted to suppress filter divergence. After analyzing the inadequacies of the Sage-Husa adaptive filtering algorithm, this paper introduces a weighted adaptive filtering algorithm for autonomous radio. The introduced algorithm resolves the defect of the Sage-Husa adaptive filtering algorithm that the noise covariance matrix becomes negative definite during filtering. In addition, upper diagonal (UD) factorization and innovation adaptive control are used to reduce model estimation errors, suppress filter divergence, and improve filtering accuracy. The simulation results indicate that, compared with the Sage-Husa adaptive filtering algorithm, this algorithm adapts better to the loop and has better convergence performance and tracking accuracy, which contributes to effective and accurate carrier tracking in low-SNR environments and shows a better application prospect.
Keywords: adaptive algorithms; carrier tracking; deep space communication; Kalman filters; tracking accuracy; weighted
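The Sage-Husa idea, recursively estimating the measurement-noise statistics from the innovation sequence with fading-memory weights, can be sketched for a scalar channel. This is a generic illustration, not the paper's weighted algorithm with UD factorization; the clamp below merely stands in for its positivity fix, and the signal model is a constant tracked through noise:

```python
import random

def adaptive_kalman(measurements, q=1e-4, r0=1.0, forget=0.98):
    """Scalar Kalman filter with Sage-Husa-style recursive estimation of the
    measurement-noise variance R from the innovations; a fading-memory
    weight and a clamp keep the R estimate positive."""
    x, p, r = measurements[0], 1.0, r0
    estimates = [x]
    for k, z in enumerate(measurements[1:], start=1):
        p += q                                       # time update (random walk)
        innov = z - x                                # innovation
        d = (1 - forget) / (1 - forget ** (k + 1))   # fading-memory weight
        r = max(1e-8, (1 - d) * r + d * (innov * innov - p))  # adapt R, clamp > 0
        kgain = p / (p + r)                          # Kalman gain
        x += kgain * innov                           # measurement update
        p *= (1 - kgain)
        estimates.append(x)
    return estimates

random.seed(0)
zs = [5.0 + random.gauss(0, 0.5) for _ in range(300)]  # noisy constant signal
est = adaptive_kalman(zs)
```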
An xDeepFM recommendation model based on field factorization
13
Authors: 李子杰, 张姝, 欧阳昭相, 王俊, 吴迪. 《应用科学学报》 (CAS, CSCD, Peking University Core), 2024, Issue 3, pp. 513-524 (12 pages)
The eXtreme deep factorization machine (xDeepFM) is a context-aware recommendation model that proposes a compressed interaction network for feature crossing with controllable order and combines it with a deep neural network to improve recommendation performance. To further improve xDeepFM in recommendation scenarios, an improved xDeepFM model based on field factorization is proposed. The model enhances feature expressiveness through field information and builds multiple compressed interaction networks to learn high-order combined features. Finally, the rationality of the user-field and item-field settings is analyzed, and performance is evaluated on three MovieLens datasets of different sizes using the area under the ROC curve and log-loss metrics, verifying the effectiveness of the improved model.
Keywords: recommendation algorithm; extreme deep factorization machine; field factorization; deep learning
Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (cited 2)
14
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. 《Journal of Electronic Science and Technology》 (CAS, CSCD), 2018, Issue 1, pp. 11-15 (5 pages)
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which strongly influence the performance of the network. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We built a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
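A GA hyperparameter search of the kind described can be sketched as follows. The fitness function below is a synthetic surrogate standing in for DBNN validation error (training a DBNN per candidate is the expensive real step); the "optimum" of 256 hidden units and learning rate 0.01 is hypothetical, not from the paper:

```python
import random

def genetic_search(fitness, bounds, pop=20, gens=40, mut=0.2, seed=1):
    """Minimal real-coded genetic algorithm: elitism, uniform crossover
    between top-half parents, Gaussian mutation clamped to the bounds."""
    rng = random.Random(seed)
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=fitness)
        nxt = scored[:2]                            # elitism: keep the best two
        while len(nxt) < pop:
            a, b = rng.sample(scored[:pop // 2], 2)  # parents from the top half
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < mut:                   # Gaussian mutation
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            nxt.append(child)
        popn = nxt
    return min(popn, key=fitness)

# Surrogate for DBNN validation error (hypothetical optimum: 256 hidden
# units, learning rate 0.01 -- illustrative only)
def surrogate_error(ind):
    hidden, lr = ind
    return (hidden - 256.0) ** 2 / 1e4 + (lr - 0.01) ** 2 * 1e4

best = genetic_search(surrogate_error, [(16, 512), (0.001, 0.5)])
```

In the real pipeline, `surrogate_error` would be replaced by a function that trains a DBNN with the candidate hyperparameters and returns its validation error.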
Genetic algorithm in seismic waveform inversion and its application in deep seismic sounding data interpretation (cited 1)
15
Authors: 王夫运, 张先康. 《Acta Seismologica Sinica (English Edition)》 (EI, CSCD), 2006, Issue 2, pp. 163-172 (10 pages)
A genetic algorithm for body waveform inversion is presented for better understanding of crustal and upper mantle structures with deep seismic sounding (DSS) waveform data. A generalized reflection and transmission synthetic seismogram algorithm, capable of calculating the response of thin alternating high- and low-velocity layers, is applied as the forward modeling solution, and the genetic algorithm is used to find the optimal solution of the inverse problem. Numerical tests suggest that the method is capable of resolving low-velocity layers and thin alternating high- and low-velocity layers, and of suppressing noise. Waveform inversion using P-wave records from the Zeku, Xiahe, and Lintao shots in the seismic wide-angle reflection/refraction survey along the northeastern Qinghai-Xizang (Tibetan) Plateau has revealed fine structures at the bottom of the upper crust and alternating layers in the middle/lower crust and topmost upper mantle.
Keywords: genetic algorithm; waveform inversion; numerical test; deep seismic sounding; fine crustal structure
Directional Routing Algorithm for Deep Space Optical Network
16
Authors: Lei Guo, Xiaorui Wang, Yejun Liu, Pengchao Han, Yamin Xie, Yuchen Tan. 《China Communications》 (SCIE, CSCD), 2017, Issue 1, pp. 158-168 (11 pages)
With the development of science, economy, and society, the need for research and exploration of deep space has entered a rapid and stable development stage. The Deep Space Optical Network (DSON) is expected to become an important foundation and inevitable development trend of future deep-space communication. In this paper, we design a deep space node model capable of combining space division multiplexing with frequency division multiplexing. Furthermore, we propose the directional flooding routing algorithm (DFRA) for the DSON based on our node model. This scheme selectively forwards data packets in routing, so that energy consumption is reduced effectively because only a portion of the nodes participate in the flooding routing. Simulation results show that, compared with the traditional flooding routing algorithm (TFRA), the DFRA avoids non-directional and blind transmission. Therefore, the energy consumption in message routing is reduced and the lifespan of the DSON can be prolonged effectively. Although the complexity of routing implementation is slightly increased compared with the TFRA, the energy of nodes is saved and the transmission rate is obviously improved in the DFRA. Thus the overall performance of the DSON can be significantly improved.
Keywords: deep space optical network; routing algorithm; directional flooding routing algorithm; traditional flooding routing algorithm
Research on a CNN-LSTM-Attention method based on PID search optimization for predicting the electrolysis temperature of aluminum electrolysis cells (cited 4)
17
Authors: 尹刚, 朱淼, 全鹏程, 颜玥涵, 刘期烈. 《仪器仪表学报》 (Peking University Core), 2025, Issue 1, pp. 324-337 (14 pages)
The aluminum electrolysis production environment is harsh and affected by coupled multi-physics fields (electric, magnetic, flow, and temperature), leading to frequent faults in the production process. The electrolysis temperature is an important parameter affecting the service life and operating state of aluminum electrolysis cells, but because the in-cell temperature is very high and strongly corrosive, no effective method for online detection and prediction of the electrolysis temperature has yet been found. To solve this problem, theoretical analysis combined with field experiments revealed the close correlation between the electrolysis temperature and the cell's process parameters, and a deep-learning-based prediction model for the electrolysis temperature is proposed accordingly. Considering the complexity, nonlinearity, high dimensionality, and time-series nature of the process parameters, a convolutional neural network (CNN) extracts high-dimensional features from the data, a long short-term memory network (LSTM) models the time-series data of the production process, and an attention mechanism learns the correlations among different parts of the input parameters and weights them by importance. A PID search optimization algorithm (PSA) tunes the parameters of the CNN-LSTM-Attention model, reducing training time and improving performance. Field validation on actual production data shows that the proposed model achieves a coefficient of determination (R²) of 0.9637, with RMSE and MAE of 5.4176 and 3.3825, respectively. Comparisons with single-model algorithms, other prediction algorithms, and different optimization algorithms show that the model performs better, accurately predicts the electrolysis temperature, and realizes online detection of the electrolysis temperature of aluminum electrolysis cells.
Keywords: aluminum electrolysis; algorithm; electrolysis temperature; deep learning; process control
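The attention step described above, weighting inputs by learned importance, is commonly realized as scaled dot-product attention; a minimal self-attention sketch (the shapes and random inputs are illustrative, not the paper's architecture):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V -- the weighting
    step that emphasizes the most informative inputs."""
    dk = q.shape[-1]
    scores = q @ k.T / np.sqrt(dk)
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax rows
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))        # 4 time steps, 8 features (illustrative)
out, attn = scaled_dot_product_attention(x, x, x)     # self-attention
```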
A machine-learning-based method for identifying dominant inter-well water-flooding channels (cited 3)
18
Authors: 杨二龙, 陈柄君, 董驰, 曾傲, 张梓彤. 《钻采工艺》 (Peking University Core), 2025, Issue 1, pp. 157-164 (8 pages)
The formation of dominant inter-well flow channels is influenced by many combined factors, and identification requires analyzing numerous factors through a complex process. The most direct and reliable approach is to judge from profile test data combined with production performance analysis, or to verify the existence of dominant flow channels through wells that respond to treatments; however, in actual production, profile test data are insufficient and treatment-response analysis is posterior knowledge with poor timeliness, so identification accuracy and efficiency are low. Taking block M, a typical ultra-high water cut block of the Daqing Oilfield, as an example, this paper constructs a feature parameter set using dominant-factor analysis, applies particle swarm optimization (PSO) to optimize the structural parameters of a deep belief network (DBN), and improves model performance by combining layer-wise recursion with global optimization and fusing supervised and unsupervised learning, forming a machine-learning-based method for identifying dominant channels between injection and production wells. Applied to the typical block, the PSO-DBN identification model achieves an accuracy 2.8% higher than the unoptimized DBN model and 8.6% higher than an MLP neural network model; adding unlabeled samples to fuse supervised and unsupervised learning can further improve identification accuracy.
Keywords: ultra-high water cut reservoir; dominant inter-well channel; deep belief network; algorithm fusion; machine learning
A vehicle target detection algorithm based on improved YOLOv7-tiny (cited 3)
19
Authors: 赵海丽, 许修常, 潘宇航. 《兵工学报》 (Peking University Core), 2025, Issue 4, pp. 101-111 (11 pages)
To better protect people's lives and property, and to address the inaccurate statistics and delayed feedback of current manual traffic management, a vehicle target detection algorithm based on an improved YOLOv7-tiny, suitable for deployment on edge terminal devices, is proposed. A deep powerful residual convolution block is constructed to lighten the ELAN-T (Efficient Layer Aggregation Network-Tiny) module of the backbone; the ELAN-T module of the feature fusion network is lightened by pruning branches, reducing the parameter count and computation, and the structure of the feature fusion network is rebuilt; an efficient channel attention mechanism and the EIOU bounding-box loss function are introduced to improve accuracy. In experiments on the preprocessed UA-DETRAC dataset, the improved algorithm reduces the parameter count by 15.1% and the computation by 5.3% compared with the original YOLOv7-tiny, while mAP@0.5 improves by 5.3 percentage points. The results show that the improved algorithm is both lightweight and more accurate, and is suitable for deployment on edge terminal devices for road vehicle detection.
Keywords: vehicle detection; YOLOv7-tiny algorithm; deep powerful residual convolution block; lightweight efficient layer aggregation network module
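Both the EIOU loss and the mAP@0.5 metric mentioned above build on plain box IoU (EIOU adds extra center-distance and width/height penalty terms not shown here); a minimal sketch of the base quantity:

```python
def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

iou = box_iou((0, 0, 2, 2), (1, 1, 3, 3))  # → 1/7 ≈ 0.1429
```

At mAP@0.5, a detection counts as a true positive when its IoU with a ground-truth box is at least 0.5.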
Deep reinforcement learning for solving the dynamic flexible job-shop scheduling problem (cited 1)
20
Authors: 杨丹, 舒先涛, 余震, 鲁光涛, 纪松霖, 王家兵. 《现代制造工程》 (Peking University Core), 2025, Issue 2, pp. 10-16 (7 pages)
With the continuous development of intelligent manufacturing technologies such as smart workshops, research on artificial intelligence algorithms for shop scheduling has attracted much attention, and dynamic events during shop operation are an important disturbance affecting scheduling performance. A deep reinforcement learning method is therefore proposed for the dynamic flexible job-shop scheduling problem with randomly arriving jobs. A mathematical model of the dynamic flexible job shop is first built with the objective of minimizing total tardiness; then eight shop state features are extracted, six composite dispatching rules are designed, an ε-greedy action selection strategy is adopted, and the reward function is designed; finally, the advanced D3QN algorithm is used to solve the problem, and its effectiveness is verified on shop instances of different sizes. The results show that the proposed D3QN algorithm solves the dynamic flexible job-shop scheduling problem with randomly arriving jobs very effectively, achieving the best solution in 58.3% of all shop instances and reducing shop tardiness by 11.0% and 15.4% compared with the traditional DQN and DDQN algorithms, further improving production efficiency.
Keywords: deep reinforcement learning; D3QN algorithm; random job arrival; flexible job-shop scheduling; dynamic scheduling
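The ε-greedy action selection over the six dispatching rules can be sketched in a few lines (the Q-values below are illustrative; in the D3QN they would come from the dueling double Q-network):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Epsilon-greedy action selection: explore a random dispatching rule
    with probability epsilon, otherwise exploit the highest Q-value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

# Six composite dispatching rules as in the paper; Q-values illustrative
q = [0.1, 0.5, 0.3, 0.9, 0.2, 0.4]
chosen = epsilon_greedy(q, epsilon=0.0)  # epsilon 0 → pure exploitation → 3
```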