Journal Articles
3,594 articles found
1. Rapid pathologic grading-based diagnosis of esophageal squamous cell carcinoma via Raman spectroscopy and a deep learning algorithm (Cited by 1)
Authors: Xin-Ying Yu, Jian Chen, Lian-Yu Li, Feng-En Chen, Qiang He. World Journal of Gastroenterology, 2025, Issue 14, pp. 32-46.
BACKGROUND: Esophageal squamous cell carcinoma is a major histological subtype of esophageal cancer. Many molecular genetic changes are associated with its occurrence. Raman spectroscopy has become a new method for the early diagnosis of tumors because it can reflect the structures of substances and their changes at the molecular level. AIM: To detect alterations in Raman spectral information across different stages of esophageal neoplasia. METHODS: Different grades of esophageal lesions were collected, yielding a total of 360 groups of Raman spectrum data. A 1D-transformer network model was proposed to classify the spectral data of esophageal squamous cell carcinoma. In addition, a deep learning model was applied to visualize the Raman spectral data and interpret their molecular characteristics. RESULTS: A comparison of Raman spectral data across pathological grades and a visual analysis revealed that the Raman peaks with significant differences were concentrated mainly at 1095 cm^(-1) (DNA, symmetric PO, and stretching vibration), 1132 cm^(-1) (cytochrome c), 1171 cm^(-1) (acetoacetate), 1216 cm^(-1) (amide III), and 1315 cm^(-1) (glycerol). A comparison of the training results of different models revealed that the 1D-transformer network performed best, achieving 93.30% accuracy, 96.65% specificity, 93.30% sensitivity, and a 93.17% F1 score. CONCLUSION: Raman spectroscopy revealed significantly different waveforms for the different stages of esophageal neoplasia. Combining Raman spectroscopy with deep learning methods could significantly improve classification accuracy.
Keywords: Raman spectroscopy; esophageal neoplasia; early diagnosis; deep learning algorithm; rapid pathologic grading
2. Intelligent Voltage Control Method in Active Distribution Networks Based on Averaged Weighted Double Deep Q-network Algorithm (Cited by 1)
Authors: Yangyang Wang, Meiqin Mao, Liuchen Chang, Nikos D. Hatziargyriou. Journal of Modern Power Systems and Clean Energy (SCIE, EI, CSCD), 2023, Issue 1, pp. 132-143.
High penetration of distributed renewable energy sources and electric vehicles (EVs) makes the future active distribution network (ADN) highly variable. These characteristics pose great challenges to traditional voltage control methods. Voltage control based on the deep Q-network (DQN) algorithm offers a potential solution to this problem because it possesses human-level control performance. However, traditional DQN methods may overestimate action reward values, degrading the obtained solutions. In this paper, an intelligent voltage control method based on the averaged weighted double deep Q-network (AWDDQN) algorithm is proposed to overcome the overestimation of action reward values in the DQN algorithm and the underestimation of action reward values in the double deep Q-network (DDQN) algorithm. Using the proposed method, the voltage control objective is incorporated into the designed action reward values and normalized to form a Markov decision process (MDP) model, which is solved by the AWDDQN algorithm. The designed AWDDQN-based intelligent voltage control agent is trained offline and used as an online intelligent dynamic voltage regulator for the ADN. The proposed voltage control method is validated using the IEEE 33-bus and 123-bus systems containing renewable energy sources and EVs, and compared with DQN- and DDQN-based methods and traditional mixed-integer nonlinear programming based methods. The simulation results show that the proposed method has better convergence and less voltage volatility than the others.
Keywords: averaged weighted double deep Q-network (AWDDQN); deep Q-learning; active distribution network (ADN); voltage control; electric vehicle (EV)
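The abstract frames AWDDQN as a compromise between the DQN target (which tends to overestimate) and the Double-DQN target (which tends to underestimate). A minimal NumPy sketch of how such a target could be formed; the averaging over K recent Q-network snapshots and the blending weight `beta` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def awddqn_target(q_snapshots_next, q_target_next, reward, gamma, beta):
    """Sketch of an averaged weighted double-DQN target value.

    q_snapshots_next: (K, A) next-state Q-estimates from K recent online-network
                      snapshots; averaging them reduces estimation variance.
    q_target_next:    (A,) next-state Q-values from the target network.
    beta:             assumed weight blending the max-based (DQN-style) and
                      double (DDQN-style) estimates.
    """
    q_avg = q_snapshots_next.mean(axis=0)      # averaged online estimate
    a_star = int(np.argmax(q_avg))             # action chosen by averaged Q
    dqn_estimate = q_target_next.max()         # tends to overestimate
    ddqn_estimate = q_target_next[a_star]      # tends to underestimate
    blended = beta * dqn_estimate + (1 - beta) * ddqn_estimate
    return reward + gamma * blended
```

With `beta` between 0 and 1 the target interpolates between the two extremes, which is the intuition the abstract appeals to.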
3. Resource Allocation in V2X Networks: A Double Deep Q-Network Approach with Graph Neural Networks
Authors: Zhengda Huan, Jian Sun, Zeyu Chen, Ziyi Zhang, Xiao Sun, Zenghui Xiao. Computers, Materials & Continua, 2025, Issue 9, pp. 5427-5443.
With the advancement of Vehicle-to-Everything (V2X) technology, efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance. Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions. To address these challenges, this study presents an innovative framework that combines Graph Neural Networks (GNNs) with a Double Deep Q-Network (DDQN), utilizing dynamic graph structures and reinforcement learning. An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology, thereby improving decision accuracy and efficiency. Meanwhile, the framework models communication links as nodes and interference relationships as edges, effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information. Employing an aggregation mechanism based on the Graph Attention Network (GAT), it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance, ensuring more efficient and adaptive resource management. This design ensures reliable Vehicle-to-Vehicle (V2V) communication while maintaining high Vehicle-to-Infrastructure (V2I) throughput. The framework retains the global feature learning capabilities of GNNs and supports distributed network deployment, allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions. Experimental results demonstrate that the proposed method significantly reduces computational overhead, mitigates latency, and improves resource utilization efficiency in vehicular networks under complex traffic scenarios. This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in intelligent transportation systems, offering substantial theoretical significance and practical value.
Keywords: resource allocation; V2X; double deep Q-network; graph neural network
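The attention-weighted aggregation over sampled neighbors described above can be illustrated with a toy NumPy sketch. The raw attention scores and feature shapes here are assumptions; a real GAT layer computes the scores from learned weight matrices:

```python
import numpy as np

def attention_aggregate(h_self, h_neighbors, scores):
    """Softmax the raw attention scores over the sampled neighbors, then
    add the attention-weighted neighbor sum to the node's own features."""
    s = np.asarray(scores, float)
    a = np.exp(s - s.max())        # shift by max for numerical stability
    a /= a.sum()                   # attention weights sum to 1
    return np.asarray(h_self, float) + a @ np.asarray(h_neighbors, float)
```

Nodes with higher scores (e.g. stronger interferers in the V2X graph) contribute proportionally more to the aggregated embedding.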
4. Algorithmic crypto trading using information-driven bars, triple barrier labeling and deep learning
Authors: Przemysław Grądzki, Piotr Wojcik, Stefan Lessmann. Financial Innovation, 2025, Issue 1, pp. 3979-4021.
This paper investigates the optimization of data sampling and target labeling techniques to enhance algorithmic trading strategies in cryptocurrency markets, focusing on Bitcoin (BTC) and Ethereum (ETH). Traditional data sampling methods, such as time bars, often fail to capture the nuances of the continuously active and highly volatile cryptocurrency market and force traders to wait for arbitrary points in time. To address this, we propose an alternative approach using information-driven sampling methods, including the CUSUM filter, range bars, volume bars, and dollar bars, and evaluate their performance using tick-level data from January 2018 to June 2023. Additionally, we introduce the Triple Barrier method for target labeling, which offers a solution tailored for algorithmic trading as opposed to the widely used next-bar prediction. We empirically assess the effectiveness of these data sampling and labeling methods to craft profitable trading strategies. The results demonstrate that the innovative combination of CUSUM-filtered data with Triple Barrier labeling outperforms traditional time bars and next-bar prediction, achieving consistently positive trading performance even after accounting for transaction costs. Moreover, our system enables making trading decisions at any point in time on the basis of market conditions, providing an advantage over traditional methods that rely on fixed time intervals. Furthermore, the paper contributes to the ongoing debate on the applicability of Transformer models to time series classification in the context of algorithmic trading by evaluating various Transformer architectures, including the vanilla Transformer encoder, FEDformer, and Autoformer, alongside other deep learning architectures and classical machine learning models, revealing insights into their relative performance.
Keywords: cryptocurrencies; algorithmic trading; deep learning; information-driven bars; triple barrier method
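The Triple Barrier method labels each entry point by whichever barrier the subsequent price path touches first: an upper profit-taking barrier (+1), a lower stop-loss barrier (-1), or a vertical time barrier (0). A simplified sketch under assumed symmetric percentage barriers; the paper's exact barrier widths and horizon are not reproduced here:

```python
def triple_barrier_label(prices, entry, upper_pct, lower_pct, horizon):
    """Return +1/-1/0 depending on which barrier the price path hits first.
    upper_pct/lower_pct are fractional distances from the entry price;
    horizon is the number of bars until the vertical (time) barrier."""
    p0 = prices[entry]
    upper = p0 * (1 + upper_pct)   # profit-taking barrier
    lower = p0 * (1 - lower_pct)   # stop-loss barrier
    for t in range(entry + 1, min(entry + 1 + horizon, len(prices))):
        if prices[t] >= upper:
            return 1
        if prices[t] <= lower:
            return -1
    return 0                       # vertical barrier expired first
```

Unlike next-bar prediction, the label depends on the whole path within the horizon, which is what makes it usable directly as a trading signal.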
5. Deep Learning Mixed Hyper-Parameter Optimization Based on Improved Cuckoo Search Algorithm
Authors: TONG Yu, CHEN Rong, HU Biling. Wuhan University Journal of Natural Sciences, 2025, Issue 2, pp. 195-204.
Deep learning is an effective data mining method and has been used in many fields to solve practical problems. However, deep learning algorithms often contain hyper-parameters that may be continuous, integer, or mixed; these are often set based on experience yet largely affect the effectiveness of activity recognition. To adapt to different hyper-parameter optimization problems, an improved Cuckoo Search (CS) algorithm is proposed to optimize the mixed hyper-parameters in deep learning algorithms. The algorithm optimizes the hyper-parameters in the deep learning model robustly and intelligently selects the combination of integer and continuous hyper-parameters that makes the model optimal. The mixed hyper-parameters of a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and CNN-LSTM are then optimized with this methodology on smart home activity recognition datasets. Results show that the methodology can improve the performance of the deep learning model and that, experienced or not, users can obtain a better deep learning model using this method.
Keywords: improved cuckoo search algorithm; mixed hyper-parameter; optimization; deep learning
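For readers unfamiliar with the base method, a compact sketch of standard cuckoo search minimizing a toy objective: heavy-tailed random steps propose new nests, and a fraction `pa` of the worst nests is abandoned and re-initialized each generation. This is the vanilla algorithm, not the paper's improved mixed-variable variant, and the step-size constant is an assumption:

```python
import numpy as np

def cuckoo_search(f, bounds, n_nests=15, pa=0.25, iters=200, seed=0):
    """Minimize f over a box. bounds = (lower_list, upper_list)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
    nests = rng.uniform(lo, hi, size=(n_nests, lo.size))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        # heavy-tailed (Levy-like) step toward/around the best nest
        step = rng.standard_cauchy(size=nests.shape) * 0.01 * (nests - best)
        new = np.clip(nests + step, lo, hi)
        new_fit = np.array([f(x) for x in new])
        improved = new_fit < fit                 # greedy replacement
        nests[improved], fit[improved] = new[improved], new_fit[improved]
        # abandon a fraction pa of the worst nests and re-seed them
        n_bad = int(pa * n_nests)
        worst = np.argsort(fit)[-n_bad:]
        nests[worst] = rng.uniform(lo, hi, size=(n_bad, lo.size))
        fit[worst] = np.array([f(x) for x in nests[worst]])
    return nests[np.argmin(fit)], fit.min()
```

For mixed hyper-parameters, integer dimensions would additionally be rounded before evaluation, which is the kind of extension the paper's improved CS addresses.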
6. Energy Optimization for Autonomous Mobile Robot Path Planning Based on Deep Reinforcement Learning
Authors: Longfei Gao, Weidong Wang, Dieyun Ke. Computers, Materials & Continua, 2026, Issue 1, pp. 984-998.
At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The incorporation of a multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability compared to baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
Keywords: autonomous mobile robot; deep reinforcement learning; energy optimization; multi-head attention mechanism; prioritized experience replay; dueling deep Q-network
7. Deep neural network algorithm for estimating maize biomass based on simulated Sentinel-2A vegetation indices and leaf area index (Cited by 15)
Authors: Xiuliang Jin, Zhenhai Li, Haikuan Feng, Zhibin Ren, Shaokun Li. The Crop Journal (SCIE, CAS, CSCD), 2020, Issue 1, pp. 87-97.
Accurate estimation of biomass is necessary for evaluating crop growth and predicting crop yield. Biomass is also a key trait in increasing grain yield by crop breeding. The aims of this study were (i) to identify the best vegetation indices for estimating maize biomass, (ii) to investigate the relationship between biomass and leaf area index (LAI) at several growth stages, and (iii) to evaluate a biomass model using measured vegetation indices or simulated Sentinel-2A vegetation indices and LAI using a deep neural network (DNN) algorithm. The results showed that biomass was associated with all vegetation indices. The three-band water index (TBWI) was the best vegetation index for estimating biomass, with R2, RMSE, and RRMSE of 0.76, 2.84 t ha−1, and 38.22%, respectively. LAI was highly correlated with biomass (R2 = 0.89, RMSE = 2.27 t ha−1, and RRMSE = 30.55%). Estimated biomass based on 15 hyperspectral vegetation indices was in high agreement with measured biomass using the DNN algorithm (R2 = 0.83, RMSE = 1.96 t ha−1, and RRMSE = 26.43%). Biomass estimation accuracy was further increased when LAI was combined with the 15 vegetation indices (R2 = 0.91, RMSE = 1.49 t ha−1, and RRMSE = 20.05%). Relationships between the hyperspectral vegetation indices and biomass differed from those between simulated Sentinel-2A vegetation indices and biomass. Biomass estimation from the hyperspectral vegetation indices was more accurate than that from the simulated Sentinel-2A vegetation indices (R2 = 0.87, RMSE = 1.84 t ha−1, and RRMSE = 24.76%). The DNN algorithm was effective in improving the estimation accuracy of biomass. This work provides a guideline for estimating maize biomass in this region using remote sensing technology and the DNN algorithm.
Keywords: biomass estimation; maize; vegetation indices; deep neural network algorithm; LAI
8. Weighted adaptive filtering algorithm for carrier tracking of deep space signal (Cited by 8)
Authors: Song Qingping, Liu Rongke. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2015, Issue 4, pp. 1236-1244.
Carrier tracking is a central difficulty of signal processing in deep space communication systems and receives great emphasis. For an autonomous radio receiving system in deep space, tracking of the received signal must be automatic when the signal-to-noise ratio (SNR) is unknown. If a frequency-locked loop (FLL) or phase-locked loop (PLL) with fixed loop bandwidth, or a Kalman filter with fixed noise variance, is adopted, estimation errors may accumulate and the filter may diverge. Therefore, a Kalman filter algorithm with adaptive capability is adopted to suppress filter divergence. After analyzing the inadequacies of the Sage-Husa adaptive filtering algorithm, this paper introduces a weighted adaptive filtering algorithm for autonomous radio. The introduced algorithm resolves the defect of the Sage-Husa adaptive filtering algorithm whereby the noise covariance matrix becomes negative definite during filtering. In addition, upper diagonal (UD) factorization and innovation adaptive control are used to reduce model estimation errors, suppress filter divergence, and improve filtering accuracy. The simulation results indicate that, compared with the Sage-Husa adaptive filtering algorithm, this algorithm adapts better to the loop and has better convergence performance and tracking accuracy, which contributes to effective and accurate carrier tracking in low-SNR environments and shows a better application prospect.
Keywords: adaptive algorithms; carrier tracking; deep space communication; Kalman filters; tracking accuracy; weighted
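The Sage-Husa defect mentioned above (the innovation-based noise estimate turning negative definite) can be seen in a minimal one-dimensional sketch, where the measurement-noise estimate is floored to stay positive. The forgetting factor `b`, the weighting formula, and the flooring constant are illustrative assumptions, not the paper's UD-factorized algorithm:

```python
import numpy as np

def adaptive_kalman_step(x, P, z, F, H, Q, R, k, b=0.98):
    """One predict/update Kalman step with a Sage-Husa-style fading-memory
    estimate of the measurement-noise covariance R (illustrative sketch)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                       # innovation
    d = (1 - b) / (1 - b ** (k + 1))         # fading-memory weight
    R_new = (1 - d) * R + d * (np.outer(y, y) - H @ P_pred @ H.T)
    # crude positivity floor: without it the estimate can become negative
    # definite, the divergence problem the abstract discusses
    R_new = np.maximum(R_new, 1e-6 * np.eye(R_new.shape[0]))
    S = H @ P_pred @ H.T + R_new
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, R_new
```

The paper's weighted algorithm replaces this crude floor with UD factorization and innovation adaptive control to keep the covariances well-conditioned.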
9. Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (Cited by 2)
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. Journal of Electronic Science and Technology (CAS, CSCD), 2018, Issue 1, pp. 11-15.
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which strongly influence the performance of the network. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We built a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
10. Multi-Agent Path Planning Method Based on Improved Deep Q-Network in Dynamic Environments (Cited by 2)
Authors: LI Shuyi, LI Minzhe, JING Zhongliang. Journal of Shanghai Jiaotong University (Science) (EI), 2024, Issue 4, pp. 601-612.
The multi-agent path planning problem presents significant challenges in dynamic environments, primarily due to the ever-changing positions of obstacles and the complex interactions between agents' actions. These factors contribute to a tendency for the solution to converge slowly and, in some cases, diverge altogether. To address this issue, this paper introduces a novel approach utilizing a double dueling deep Q-network (D3QN), tailored for dynamic multi-agent environments. A novel reward function based on multi-agent positional constraints is designed, and a training strategy based on incremental learning is performed to achieve collaborative path planning of multiple agents. Moreover, a greedy and Boltzmann probability selection policy is introduced for action selection and for avoiding convergence to local extrema. To fuse radar and image sensors, a convolutional neural network-long short-term memory (CNN-LSTM) architecture is constructed to extract features from multi-source measurements as the input of the D3QN. The algorithm's efficacy and reliability are validated in a simulated environment using the Robot Operating System and Gazebo. The simulation results show that the proposed algorithm provides a real-time solution for path planning tasks in dynamic scenarios. In terms of average success rate and accuracy, the proposed method is superior to other deep learning algorithms, and the convergence speed is also improved.
Keywords: multi-agent; path planning; deep reinforcement learning; deep Q-network
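The Boltzmann selection policy mentioned above samples actions with probability proportional to exp(Q/τ): a low temperature τ is near-greedy, a high τ approaches uniform exploration. A small self-contained sketch; the temperature values used are illustrative:

```python
import numpy as np

def boltzmann_action(q_values, tau, rng):
    """Sample an action index from the softmax of Q-values at temperature tau.
    Returns the sampled action and the full probability vector."""
    q = np.asarray(q_values, float)
    logits = (q - q.max()) / tau              # shift by max for stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(q), p=probs), probs
```

Annealing τ from high to low over training moves the policy smoothly from exploration to exploitation, which helps avoid the local extrema the abstract mentions.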
11. Multi-Robot Task Allocation Using Multimodal Multi-Objective Evolutionary Algorithm Based on Deep Reinforcement Learning (Cited by 5)
Authors: 苗镇华, 黄文焘, 张依恋, 范勤勤. Journal of Shanghai Jiaotong University (Science) (EI), 2024, Issue 3, pp. 377-387.
The overall performance of multi-robot collaborative systems is significantly affected by multi-robot task allocation. To improve the effectiveness, robustness, and safety of multi-robot collaborative systems, a multimodal multi-objective evolutionary algorithm based on deep reinforcement learning is proposed in this paper. The improved multimodal multi-objective evolutionary algorithm is used to solve multi-robot task allocation problems. Moreover, a deep reinforcement learning strategy is used in the last generation to provide a high-quality path for each assigned robot in an end-to-end manner. Comparisons with three popular multimodal multi-objective evolutionary algorithms on three different scenarios of multi-robot task allocation problems are carried out to verify the performance of the proposed algorithm. The experimental results show that the proposed algorithm can generate sufficient equivalent schemes to improve the availability and robustness of multi-robot collaborative systems in uncertain environments, and also produces the best scheme to improve the overall task execution efficiency of multi-robot collaborative systems.
Keywords: multi-robot task allocation; multi-robot cooperation; path planning; multimodal multi-objective evolutionary algorithm; deep reinforcement learning
12. Deep Learning and Holt-Trend Algorithms for Predicting Covid-19 Pandemic (Cited by 3)
Authors: Theyazn H. H. Aldhyani, Melfi Alrasheed, Mosleh Hmoud Al-Adaileh, Ahmed Abdullah Alqarni, Mohammed Y. Alzahrani, Ahmed H. Alahmadi. Computers, Materials & Continua (SCIE, EI), 2021, Issue 5, pp. 2141-2160.
The Covid-19 epidemic poses a serious public health threat to the world, where people with little or no pre-existing immunity can be more vulnerable to its effects. Thus, developing surveillance systems for predicting the Covid-19 pandemic at an early stage could save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the coronavirus. The Long Short-Term Memory (LSTM) and Holt-trend algorithms were applied to predict confirmed case and death numbers. The real-time data used were collected from the World Health Organization (WHO). In the proposed research, we considered three countries to test the proposed model, namely Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models show better performance in predicting the cases of coronavirus patients. Standard performance measures, mean squared error (MSE), root mean squared error (RMSE), mean error, and correlation, were employed to evaluate the proposed models. The empirical correlation results of the LSTM are 99.94%, 99.94%, and 99.91% in predicting the number of confirmed cases in the three countries. As for the LSTM model's results in predicting the number of deaths from Covid-19, they are 99.86%, 98.876%, and 99.16% for Saudi Arabia, Italy, and Spain, respectively. Similarly, the Holt-trend model's correlation results in predicting the number of confirmed Covid-19 cases are 99.06%, 99.96%, and 99.94%, whereas its results in predicting the number of death cases are 99.80%, 99.96%, and 99.94% for Saudi Arabia, Italy, and Spain, respectively. The empirical results indicate the efficient performance of the presented model in predicting the number of confirmed and death cases of Covid-19 in these countries. Such findings provide better insights regarding the future of this pandemic in general. The results were obtained by applying time series models, which need to be considered to help save the lives of many people.
Keywords: deep learning algorithm; Holt-trend; prediction; Covid-19; machine learning
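Holt's linear-trend method maintains a smoothed level and a smoothed trend and extrapolates them over the forecast horizon. A minimal sketch; the smoothing weights shown are illustrative and the paper's fitted values are not reproduced here:

```python
def holt_trend_forecast(series, alpha, beta, steps):
    """Holt's linear-trend exponential smoothing.
    alpha smooths the level, beta smooths the trend; the h-step-ahead
    forecast is level + h * trend."""
    level = series[0]
    trend = series[1] - series[0]            # initial trend estimate
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(steps)]
```

On a perfectly linear series the method recovers the line exactly, which is a quick sanity check before applying it to noisy case counts.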
13. Sustainable Investment Forecasting of Power Grids Based on the Deep Restricted Boltzmann Machine Optimized by the Lion Algorithm (Cited by 4)
Authors: Qian Wang, Xiaolong Yang, Di Pu, Yingying Fan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, Issue 1, pp. 269-286.
This paper proposes a new power grid investment prediction model based on the deep restricted Boltzmann machine (DRBM) optimized by the Lion algorithm (LA). Firstly, two factors, transmission and distribution price reform (TDPR) and 5G station construction, were comprehensively incorporated into the consideration of influencing factors, and the fuzzy threshold method was used to screen out critical influencing factors. Then, the LA was used to optimize the parameters of the DRBM model to improve the model's prediction accuracy, and the model was trained with the selected influencing factors and investment. Finally, the LA-DRBM model was used to predict the investment of a power grid enterprise, and the final prediction result was obtained by modifying the initial result with the modifying factors. The LA-DRBM model compensates for the deficiency of the single model and greatly improves the investment prediction accuracy of the power grid. In this study, a power grid enterprise was taken as an example to carry out an empirical analysis to prove the validity of the model, and a comparison with the RBM, support vector machine (SVM), back propagation neural network (BPNN), and regression models was conducted to verify the superiority of the model. The conclusion indicates that the proposed model has a strong generalization ability and good robustness, is able to abstract combinations of low-level features into high-level features, and can improve the efficiency of the model's calculations for investment prediction of power grid enterprises.
Keywords: Lion algorithm; deep restricted Boltzmann machine; fuzzy threshold method; power grid investment forecasting
14. Histopathological Diagnosis System for Gastritis Using Deep Learning Algorithm (Cited by 2)
Authors: Wei Ba, Shuhao Wang, Cancheng Liu, Yuefeng Wang, Huaiyin Shi, Zhigang Song. Chinese Medical Sciences Journal (CAS, CSCD), 2021, Issue 3, pp. 204-209.
Objective: To develop a deep learning algorithm for the pathological classification of chronic gastritis and assess its performance using whole-slide images (WSIs). Methods: We retrospectively collected 1,250 gastric biopsy specimens (1,128 gastritis, 122 normal mucosa) from PLA General Hospital. The deep learning algorithm based on the DeepLab v3 (ResNet-50) architecture was trained and validated using 1,008 WSIs and 100 WSIs, respectively. The diagnostic performance of the algorithm was tested on an independent test set of 142 WSIs, with the pathologists' consensus diagnosis as the gold standard. Results: Receiver operating characteristic (ROC) curves were generated for chronic superficial gastritis (CSuG), chronic active gastritis (CAcG), and chronic atrophic gastritis (CAtG) in the test set. The areas under the ROC curves (AUCs) of the algorithm for CSuG, CAcG, and CAtG were 0.882, 0.905, and 0.910, respectively. The sensitivity and specificity of the deep learning algorithm for the classification of CSuG, CAcG, and CAtG were 0.790 and 1.000 (accuracy 0.880), 0.985 and 0.829 (accuracy 0.901), and 0.952 and 0.992 (accuracy 0.986), respectively. The overall predicted accuracy for the three types of gastritis was 0.867. By flagging the suspicious regions identified by the algorithm in the WSI, a more transparent and interpretable diagnosis can be generated. Conclusion: The deep learning algorithm achieved high accuracy for chronic gastritis classification using WSIs. By pre-highlighting the different gastritis regions, it might be used as an auxiliary diagnostic tool to improve the work efficiency of pathologists.
Keywords: artificial intelligence; deep learning algorithm; gastritis; whole-slide pathological images
15. Genetic algorithm in seismic waveform inversion and its application in deep seismic sounding data interpretation (Cited by 1)
Authors: 王夫运, 张先康. Acta Seismologica Sinica (English Edition) (EI, CSCD), 2006, Issue 2, pp. 163-172.
A genetic algorithm for body waveform inversion is presented for better understanding of crustal and upper mantle structures with deep seismic sounding (DSS) waveform data. A general reflection and transmission synthetic seismogram algorithm, which is capable of calculating the response of thin alternating high- and low-velocity layers, is applied for forward modeling, and the genetic algorithm is used to find the optimal solution of the inverse problem. Numerical tests suggest that the method is capable of resolving low-velocity layers and thin alternating high- and low-velocity layers, and of suppressing noise. Waveform inversion using P-wave records from the Zeku, Xiahe, and Lintao shots in the seismic wide-angle reflection/refraction survey along the northeastern Qinghai-Xizang (Tibetan) Plateau has revealed fine structures at the bottom of the upper crust and alternating layers in the middle/lower crust and topmost upper mantle.
Keywords: genetic algorithm; waveform inversion; numerical test; deep seismic sounding; fine crustal structure
16. Convolutional Neural Network-Based Deep Q-Network (CNN-DQN) Resource Management in Cloud Radio Access Network (Cited by 2)
Authors: Amjad Iqbal, Mau-Luen Tham, Yoong Choon Chang. China Communications (SCIE, CSCD), 2022, Issue 10, pp. 129-142.
The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector towards the adoption of the fifth-generation (5G) mobile networks. Cloud radio access network (CRAN) is a prominent framework in the 5G mobile network to meet the above requirements by deploying low-cost and intelligent multiple distributed antennas known as remote radio heads (RRHs). However, achieving the optimal resource allocation (RA) in CRAN using the traditional approach is still challenging due to the complex structure. In this paper, we introduce the convolutional neural network-based deep Q-network (CNN-DQN) to balance the energy consumption and guarantee the user quality of service (QoS) demand in downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build up a 3-layer CNN to capture the environment feature as an input state space. We then use DQN to turn on/off the RRHs dynamically based on the user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on the user constraint and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct the simulation to compare our proposed scheme with nature DQN and the traditional approach.
Keywords: energy efficiency (EE), Markov decision process (MDP), convolutional neural network (CNN), cloud RAN, deep Q-network (DQN)
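The on/off RRH decision described in the abstract can be sketched with a tabular Q-learning stand-in for the deep Q-network: the state is a discretized QoS demand level, the action is how many RRHs to activate, and the reward trades a per-RRH energy cost against a QoS-violation penalty. The demand levels, cost, and penalty values are invented for illustration; the paper's CNN state encoding and transmit-power RA step are omitted.

```python
import random

random.seed(1)

N_RRH = 4                          # remote radio heads that can be switched on/off
DEMANDS = [1, 2, 3]                # discretized user QoS demand levels (assumed)
ACTIONS = list(range(N_RRH + 1))   # action = number of active RRHs

Q = {(d, a): 0.0 for d in DEMANDS for a in ACTIONS}

def reward(demand, active):
    qos_penalty = 10.0 if active < demand else 0.0   # QoS violated
    energy_cost = 1.0 * active                        # per-RRH power draw
    return -(qos_penalty + energy_cost)

alpha, eps = 0.1, 0.2
for _ in range(5000):
    d = random.choice(DEMANDS)                 # demand arrives
    if random.random() < eps:                  # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(d, x)])
    # contextual-bandit style update (no next-state bootstrap, for brevity)
    Q[(d, a)] += alpha * (reward(d, a) - Q[(d, a)])

policy = {d: max(ACTIONS, key=lambda a: Q[(d, a)]) for d in DEMANDS}
print(policy)
```

The learned policy activates exactly as many RRHs as the demand requires — the "minimum number of active RRHs" behavior the paper targets, here recovered in a toy setting.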
17. Manufacturing Resource Scheduling Based on Deep Q-Network (Cited: 1)
Authors: ZHANG Yufei, ZOU Yuanhao, ZHAO Xiaodong. 《Wuhan University Journal of Natural Sciences》 (CAS, CSCD), 2022, No. 6, pp. 531-538.
To optimize machine allocation and task dispatching in smart manufacturing factories, this paper proposes a manufacturing resource scheduling framework based on reinforcement learning (RL). The framework formulates the entire scheduling process as a multi-stage sequential decision problem and obtains the scheduling order through the combination of a deep convolutional neural network (CNN) and an improved deep Q-network (DQN). Specifically, in the Markov decision process (MDP) representation, the feature matrix is taken as the state space and a set of heuristic dispatching rules as the action space. The deep CNN approximates the state-action values, and the double dueling deep Q-network with prioritized experience replay and noisy network (D3QPN2) selects the appropriate action for the current state. In the experiments, compared with traditional heuristic methods, the proposed method learns high-quality scheduling policies and achieves shorter makespans on standard public datasets.
Keywords: smart manufacturing, job shop scheduling, convolutional neural network, deep Q-network
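The abstract's key design choice — an action space made of heuristic dispatching rules rather than individual job assignments — can be made concrete with a toy example: each "action" fixes a job ordering, and the resulting makespan is what the RL agent would learn to minimize. The job durations, machine count, and rule set below are invented; the paper uses real job-shop benchmarks and learns which rule to apply at each decision stage.

```python
# Two identical machines; a dispatching rule fixes the job order,
# then each job goes to the earliest-free machine (greedy list scheduling).
jobs = [5, 2, 8, 3, 6]   # invented processing times

def makespan(order, n_machines=2):
    machines = [0] * n_machines
    for p in order:
        i = machines.index(min(machines))   # earliest-free machine
        machines[i] += p
    return max(machines)

# A few classic dispatching rules — candidates for the DQN's action space.
RULES = {
    "SPT": sorted(jobs),                # shortest processing time first
    "LPT": sorted(jobs, reverse=True),  # longest processing time first
    "FIFO": list(jobs),                 # arrival order
}
spans = {name: makespan(order) for name, order in RULES.items()}
print(spans)
```

On this instance LPT wins (makespan 13 vs. 14 for FIFO and 15 for SPT), which illustrates why learning *which* rule to apply per state can beat committing to any single heuristic.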
18. DeepSurNet-NSGA II: Deep Surrogate Model-Assisted Multi-Objective Evolutionary Algorithm for Enhancing Leg Linkage in Walking Robots (Cited: 1)
Authors: Sayat Ibrayev, Batyrkhan Omarov, Arman Ibrayeva, Zeinel Momynkulov. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 10, pp. 229-249.
This research paper presents a comprehensive investigation into the effectiveness of DeepSurNet-NSGA II (Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II) for solving complex multi-objective optimization problems, with a particular focus on robotic leg-linkage design. The study introduces an approach that integrates deep learning-based surrogate models with the Non-dominated Sorting Genetic Algorithm II, aiming to enhance the efficiency and precision of the optimization process. Through a series of empirical experiments and algorithmic analyses, the paper demonstrates a high degree of correlation between solutions generated by DeepSurNet-NSGA II and those obtained from direct experimental methods, underscoring the algorithm's capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands. The methodology covers the algorithm's configuration, the experimental setup, and the performance-evaluation criteria, ensuring reproducibility of results and facilitating future advances in the field. The findings confirm the practical applicability and theoretical soundness of DeepSurNet-NSGA II for multi-objective optimization and highlight its potential as a tool for engineering and design optimization.
Keywords: multi-objective optimization, genetic algorithm, surrogate model, deep learning, walking robots
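The core surrogate-assisted idea — pre-screen evolutionary offspring on a cheap model and spend expensive evaluations only on promising candidates — can be sketched in a deliberately simplified form: single-objective rather than multi-objective, and a 1-nearest-neighbour archive lookup rather than a deep surrogate. The objective function, archive grid, and mutation scale are all invented for illustration.

```python
import math
import random

random.seed(2)

def expensive(x):
    # stands in for a costly leg-linkage simulation or physical experiment
    return (x - 0.3) ** 2 + 0.05 * math.sin(20 * x)

# seed archive: a handful of real evaluations on a coarse grid
archive = [(x, expensive(x)) for x in [i / 10 for i in range(11)]]

def surrogate(x):
    # 1-nearest-neighbour surrogate: cheap archive lookup, no simulation;
    # the paper trains a deep network on the archive instead
    return min(archive, key=lambda p: abs(p[0] - x))[1]

parents = [random.random() for _ in range(8)]
offspring = [min(max(p + random.gauss(0, 0.1), 0.0), 1.0) for p in parents]

# pre-screen all offspring on the surrogate; spend the single expensive
# evaluation of this generation only on the most promising candidate
best_guess = min(offspring, key=surrogate)
true_value = expensive(best_guess)
archive.append((best_guess, true_value))   # surrogate improves over time
print(best_guess, true_value)
```

One generation here costs one real evaluation instead of eight, which is the source of the "significantly reduced computational demands" the abstract reports; NSGA-II would replace the scalar `min` with non-dominated sorting over several objectives.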
19. Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection System (Cited: 1)
Authors: Ala Saleh Alluhaidan, Masoud Alajmi, Fahd N. Al-Wesabi, Anwer Mustafa Hilal, Manar Ahmed Hamza, Abdelwahed Motwakel. 《Computers, Materials & Continua》 (SCIE, EI), 2022, No. 8, pp. 2713-2727.
Human fall detection (FD) plays an important part in sensor-based alarm systems, enabling physical therapists to minimize the effects of fall events and save lives. Elderly people often suffer from several diseases, and falls can occur at any time. This paper presents an Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection (IAOA-DLFD) model to identify fall/non-fall events. The proposed IAOA-DLFD technique applies several levels of pre-processing to improve input image quality. A Capsule Network (CapsNet)-based feature extractor is then derived to produce an optimal set of feature vectors, with the IAOA used to boost overall FD performance through the optimal choice of CapsNet hyperparameters. Lastly, a radial basis function (RBF) network determines the class labels of the test images. To showcase the enhanced performance of the IAOA-DLFD technique, a wide range of experiments was executed; the results show improved detection over recent methods, with an accuracy of 0.997.
Keywords: fall detection, intelligent model, deep learning, Archimedes optimization algorithm, capsule network
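The final classification stage of the pipeline — an RBF network assigning fall/non-fall labels to feature vectors — can be sketched minimally: Gaussian basis functions centered on each class, with the most active basis deciding the label. The 2-D synthetic "features," centers, and width below are invented stand-ins for CapsNet feature vectors and trained RBF parameters.

```python
import math
import random

random.seed(3)

# Toy 2-class data standing in for CapsNet feature vectors (2-D for clarity)
falls = [(random.gauss(2.0, 0.3), random.gauss(2.0, 0.3)) for _ in range(20)]
non_falls = [(random.gauss(0.0, 0.3), random.gauss(0.0, 0.3)) for _ in range(20)]

CENTERS = [(2.0, 2.0), (0.0, 0.0)]   # one Gaussian basis per class (assumed)
WIDTH = 1.0

def rbf_features(x):
    # hidden layer: Gaussian radial basis activations over the centers
    return [math.exp(-sum((a - c) ** 2 for a, c in zip(x, ctr)) / (2 * WIDTH ** 2))
            for ctr in CENTERS]

def classify(x):
    # trivial output layer: label of the most active basis; a trained RBF
    # network would instead apply learned output weights
    h = rbf_features(x)
    return "fall" if h[0] > h[1] else "non-fall"

correct = sum(classify(x) == "fall" for x in falls) + \
          sum(classify(x) == "non-fall" for x in non_falls)
print(correct, "/", 40)
```

With well-separated clusters the toy classifier is perfect; in the paper the separability comes from the CapsNet features, whose quality the IAOA tunes via hyperparameter search.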
20. Gradient Optimizer Algorithm with Hybrid Deep Learning Based Failure Detection and Classification in the Industrial Environment (Cited: 1)
Authors: Mohamed Zarouan, Ibrahim M. Mehedi, Shaikh Abdul Latif, Md. Masud Rana. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 2, pp. 1341-1364.
Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring seamless operation. Industrial processes are becoming smarter with the emergence of Industry 4.0: modern processes are equipped with numerous sensors that collect process data to find arising or prevailing faults and to monitor process status. Fault diagnosis of rotating machines plays a major role in engineering and industrial production. Because existing fault-diagnosis approaches depend heavily on professional experience and human knowledge, intelligent fault diagnosis based on deep learning (DL) has attracted researchers' interest; DL achieves the desired fault classification with automatic feature learning. This article therefore designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) for the industrial environment. The presented GOAHDL-FDC technique first applies the continuous wavelet transform (CWT) to preprocess the actual vibration signals of the rotating machinery. Next, a residual network (ResNet18) extracts features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, GOA-based hyperparameter tuning adjusts the parameter values of the HDL model. Experimental analysis of the GOAHDL-FDC algorithm across a series of simulations highlights its better results under different aspects.
Keywords: fault detection, Industry 4.0, gradient optimizer algorithm, deep learning, rotating machinery, artificial intelligence
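The CWT preprocessing step can be illustrated with a naive pure-Python transform: correlate the vibration signal with a scaled mother wavelet at several scales, producing the time-scale map that ResNet18 would consume as an image. The Ricker wavelet, scales, and impulse "fault signature" below are illustrative assumptions; production pipelines use a library (e.g. PyWavelets or SciPy) and Morlet wavelets are also common for vibration data.

```python
import math

def ricker(t, scale):
    # Ricker ("Mexican hat") wavelet, a common CWT mother wavelet
    a = t / scale
    return (1 - 2 * math.pi ** 2 * a ** 2) * math.exp(-math.pi ** 2 * a ** 2)

def cwt(signal, scales, step=1.0):
    # naive O(n^2 * len(scales)) continuous wavelet transform:
    # correlate the signal with shifted, scaled wavelets
    n = len(signal)
    out = []
    for s in scales:
        row = []
        for shift in range(n):
            acc = sum(signal[i] * ricker((i - shift) * step, s) for i in range(n))
            row.append(acc / math.sqrt(s))   # energy normalization per scale
        out.append(row)
    return out

# toy "vibration signal": a lone burst at sample 32 mimicking a fault impulse
sig = [0.0] * 64
sig[32] = 1.0
coeffs = cwt(sig, scales=[2, 4, 8])          # rows = scales, cols = time
peak = max(range(64), key=lambda i: abs(coeffs[0][i]))
print(peak)
```

The coefficient map localizes the impulse at its sample index across scales — the kind of time-frequency structure the downstream ResNet18 learns to associate with specific fault types.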