Journal Articles
2,579 articles found
1. Rapid pathologic grading-based diagnosis of esophageal squamous cell carcinoma via Raman spectroscopy and a deep learning algorithm (Cited: 1)
Authors: Xin-Ying Yu, Jian Chen, Lian-Yu Li, Feng-En Chen, Qiang He. World Journal of Gastroenterology, 2025, Issue 14, pp. 32-46 (15 pages).
BACKGROUND: Esophageal squamous cell carcinoma is a major histological subtype of esophageal cancer. Many molecular genetic changes are associated with its occurrence. Raman spectroscopy has become a new method for the early diagnosis of tumors because it can reflect the structures of substances and their changes at the molecular level. AIM: To detect alterations in Raman spectral information across different stages of esophageal neoplasia. METHODS: Different grades of esophageal lesions were collected, and a total of 360 groups of Raman spectrum data were collected. A 1D-transformer network model was proposed to handle the task of classifying the spectral data of esophageal squamous cell carcinoma. In addition, a deep learning model was applied to visualize the Raman spectral data and interpret their molecular characteristics. RESULTS: A comparison among Raman spectral data with different pathological grades and a visual analysis revealed that the Raman peaks with significant differences were concentrated mainly at 1095 cm^(-1) (DNA, symmetric PO, and stretching vibration), 1132 cm^(-1) (cytochrome c), 1171 cm^(-1) (acetoacetate), 1216 cm^(-1) (amide III), and 1315 cm^(-1) (glycerol). A comparison among the training results of different models revealed that the 1D-transformer network performed best, achieving 93.30% accuracy, 96.65% specificity, 93.30% sensitivity, and a 93.17% F1 score. CONCLUSION: Raman spectroscopy revealed significantly different waveforms for the different stages of esophageal neoplasia. The combination of Raman spectroscopy and deep learning methods could significantly improve the accuracy of classification.
Keywords: Raman spectroscopy; esophageal neoplasia; early diagnosis; deep learning algorithm; rapid pathologic grading
2. Algorithmic crypto trading using information-driven bars, triple barrier labeling and deep learning
Authors: Przemysław Grądzki, Piotr Wojcik, Stefan Lessmann. Financial Innovation, 2025, Issue 1, pp. 3979-4021 (43 pages).
This paper investigates the optimization of data sampling and target labeling techniques to enhance algorithmic trading strategies in cryptocurrency markets, focusing on Bitcoin (BTC) and Ethereum (ETH). Traditional data sampling methods, such as time bars, often fail to capture the nuances of the continuously active and highly volatile cryptocurrency market and force traders to wait for arbitrary points in time. To address this, we propose an alternative approach using information-driven sampling methods, including the CUSUM filter, range bars, volume bars, and dollar bars, and evaluate their performance using tick-level data from January 2018 to June 2023. Additionally, we introduce the Triple Barrier method for target labeling, which offers a solution tailored for algorithmic trading as opposed to the widely used next-bar prediction. We empirically assess the effectiveness of these data sampling and labeling methods to craft profitable trading strategies. The results demonstrate that the innovative combination of CUSUM-filtered data with Triple Barrier labeling outperforms traditional time bars and next-bar prediction, achieving consistently positive trading performance even after accounting for transaction costs. Moreover, our system enables making trading decisions at any point in time on the basis of market conditions, providing an advantage over traditional methods that rely on fixed time intervals. Furthermore, the paper contributes to the ongoing debate on the applicability of Transformer models to time series classification in the context of algorithmic trading by evaluating various Transformer architectures (including the vanilla Transformer encoder, FEDformer, and Autoformer) alongside other deep learning architectures and classical machine learning models, revealing insights into their relative performance.
Keywords: cryptocurrencies; algorithmic trading; deep learning; information-driven bars; triple barrier method
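For context, the CUSUM event filter and triple-barrier labeling referenced in the abstract are standard constructions (popularized by López de Prado). A minimal Python sketch of both follows; the volatility window, barrier multipliers, and horizon are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pandas as pd

def cusum_events(close: pd.Series, threshold: float) -> pd.DatetimeIndex:
    """Symmetric CUSUM filter: emit an event whenever the cumulative
    log-return drift since the last event exceeds `threshold` either way."""
    events, s_pos, s_neg = [], 0.0, 0.0
    log_ret = np.log(close).diff().dropna()
    for t, r in log_ret.items():
        s_pos, s_neg = max(0.0, s_pos + r), min(0.0, s_neg + r)
        if s_pos > threshold:
            s_pos = 0.0; events.append(t)
        elif s_neg < -threshold:
            s_neg = 0.0; events.append(t)
    return pd.DatetimeIndex(events)

def triple_barrier_labels(close: pd.Series, events: pd.DatetimeIndex,
                          pt_mult: float = 2.0, sl_mult: float = 2.0,
                          horizon: int = 50) -> pd.Series:
    """Label each event +1/-1/0 by which barrier the price path hits first:
    upper (profit-take), lower (stop-loss), or the vertical (time) barrier
    after `horizon` bars. Barrier widths are a fixed multiple of a rolling
    volatility estimate (an assumption)."""
    vol = close.pct_change().rolling(50).std()
    labels = {}
    for t0 in events:
        sigma = vol.loc[t0]
        if pd.isna(sigma):
            continue                              # not enough history yet
        path = close.loc[t0:].iloc[:horizon + 1]
        ret = path / close.loc[t0] - 1.0
        upper, lower = pt_mult * sigma, -sl_mult * sigma
        hit_up = ret[ret >= upper].index.min()    # NaT if never hit
        hit_dn = ret[ret <= lower].index.min()
        if pd.isna(hit_up) and pd.isna(hit_dn):
            labels[t0] = 0                        # vertical barrier: timeout
        elif pd.isna(hit_dn) or (not pd.isna(hit_up) and hit_up < hit_dn):
            labels[t0] = 1                        # profit-take hit first
        else:
            labels[t0] = -1                       # stop-loss hit first
    return pd.Series(labels)
```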
3. Deep Learning Mixed Hyper-Parameter Optimization Based on Improved Cuckoo Search Algorithm
Authors: TONG Yu, CHEN Rong, HU Biling. Wuhan University Journal of Natural Sciences, 2025, Issue 2, pp. 195-204 (10 pages).
Deep learning algorithms are effective data mining methods and have been used in many fields to solve practical problems. However, deep learning algorithms often contain hyper-parameters that may be continuous, integer, or mixed; these are usually set based on experience yet largely affect the effectiveness of activity recognition. To adapt to different hyper-parameter optimization problems, an improved Cuckoo Search (CS) algorithm is proposed to optimize the mixed hyper-parameters in deep learning algorithms. The algorithm optimizes the hyper-parameters in the deep learning model robustly and intelligently selects the combination of integer-type and continuous hyper-parameters that makes the model optimal. The mixed hyper-parameters in Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and CNN-LSTM models are then optimized with this methodology on smart home activity recognition datasets. Results show that the methodology can improve the performance of the deep learning model and that, experienced or not, users can obtain a better deep learning model using this method.
Keywords: improved Cuckoo Search algorithm; mixed hyper-parameter; optimization; deep learning
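The Cuckoo Search loop itself is standard (Yang & Deb): new candidates are generated by Lévy flights around the current best, and a fraction pa of the worst nests is abandoned each generation. A minimal sketch adapted to mixed integer/continuous hyper-parameters follows; the rounding-based integer handling and all constants are illustrative assumptions, not the paper's improved variant.

```python
import numpy as np
from math import gamma

def levy_step(dim: int, beta: float = 1.5) -> np.ndarray:
    """Mantegna's algorithm for Levy-stable step lengths."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(loss, bounds, int_dims=(), n_nests=15, pa=0.25, iters=100):
    """Minimize `loss` over box `bounds`; dimensions listed in `int_dims`
    (e.g., hidden-unit count, epoch count) are rounded before scoring."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    def repair(x):
        x = np.clip(x, lo, hi)
        x[list(int_dims)] = np.round(x[list(int_dims)])
        return x
    nests = np.array([repair(np.random.uniform(lo, hi)) for _ in range(n_nests)])
    fit = np.array([loss(x) for x in nests])
    best = nests[fit.argmin()].copy()
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight around the current nest, biased toward the best
            cand = repair(nests[i] + 0.01 * levy_step(len(bounds)) * (nests[i] - best))
            f = loss(cand)
            j = np.random.randint(n_nests)        # compare with a random nest
            if f < fit[j]:
                nests[j], fit[j] = cand, f
        n_drop = max(1, int(pa * n_nests))        # abandon the worst nests
        for w in fit.argsort()[-n_drop:]:
            nests[w] = repair(np.random.uniform(lo, hi))
            fit[w] = loss(nests[w])
        best = nests[fit.argmin()].copy()
    return best, fit.min()
```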
4. Enhanced Multimodal Physiological Signal Analysis for Pain Assessment Using Optimized Ensemble Deep Learning
Authors: Karim Gasmi, Olfa Hrizi, Najib Ben Aoun, Ibrahim Alrashdi, Ali Alqazzaz, Omer Hamid, Mohamed O. Altaieb, Alameen E. M. Abdalrahman, Lassaad Ben Ammar, Manel Mrabet, Omrane Necibi. Computer Modeling in Engineering & Sciences, 2025, Issue 5, pp. 2459-2489 (31 pages).
The potential applications of multimodal physiological signals in healthcare, pain monitoring, and clinical decision support systems have garnered significant attention in biomedical research. Subjective self-reporting is the foundation of conventional pain assessment methods, which may be unreliable. Deep learning is a promising alternative to resolve this limitation through automated pain classification. This paper proposes an ensemble deep-learning framework for pain assessment. The framework makes use of features collected from electromyography (EMG), skin conductance level (SCL), and electrocardiography (ECG) signals. We integrate Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Bidirectional Gated Recurrent Unit (BiGRU), and Deep Neural Network (DNN) models. We then aggregate their predictions using a weighted averaging ensemble technique to increase the classification's robustness. To improve computing efficiency and remove redundant features, we use Particle Swarm Optimization (PSO) for feature selection. This enables us to reduce the features' dimensionality without sacrificing classification accuracy. With improved accuracy, precision, recall, and F1-score across all pain levels, the experimental results show that the proposed ensemble model performs better than individual deep learning classifiers. In our experiments, the proposed model achieved over 98% accuracy, suggesting promising automated pain assessment performance. However, due to differences in validation protocols, comparisons with previous studies are still limited. Combining deep learning and feature selection techniques significantly improves model generalization, reducing overfitting and enhancing classification performance. The evaluation was conducted using the BioVid Heat Pain Dataset, confirming the model's effectiveness in distinguishing between different pain intensity levels.
Keywords: pain assessment; ensemble learning; deep learning; optimal algorithm; feature selection
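The PSO feature-selection step can be pictured as binary PSO: each particle is a 0/1 mask over feature columns and fitness is validation accuracy. A minimal sketch follows; the logistic-regression scorer and all constants are placeholder assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pso_feature_selection(X, y, n_particles=20, iters=30, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO: velocities are real-valued, positions are sampled through
    a sigmoid so each particle encodes a feature subset (1 = keep column)."""
    rng = np.random.default_rng(0)
    n_feat = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, n_feat))
    vel = rng.normal(0, 1, (n_particles, n_feat))
    def fitness(mask):
        if mask.sum() == 0:
            return 0.0                               # empty subset is useless
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # sigmoid transfer: a large positive velocity makes a 1 more likely
        pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool), pbest_fit.max()
```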
5. Heart Disease Prediction Model Using Feature Selection and Ensemble Deep Learning with Optimized Weight
Authors: Iman S. Al-Mahdi, Saad M. Darwish, Magda M. Madbouly. Computer Modeling in Engineering & Sciences, 2025, Issue 4, pp. 875-909 (35 pages).
Heart disease prediction is a critical issue in healthcare, where accurate early diagnosis can save lives and reduce healthcare costs. The problem is inherently complex due to the high dimensionality of medical data, irrelevant or redundant features, and the variability in risk factors such as age, lifestyle, and medical history. These challenges often lead to inefficient and less accurate models. Traditional prediction methodologies face limitations in effectively handling large feature sets and optimizing classification performance, which can result in overfitting, poor generalization, and high computational cost. This work proposes a novel classification model for heart disease prediction that addresses these challenges by integrating feature selection through a Genetic Algorithm (GA) with an ensemble deep learning approach optimized using the Tunicate Swarm Algorithm (TSA). GA selects the most relevant features, reducing dimensionality and improving model efficiency. The selected features are then used to train an ensemble of deep learning models, where the TSA optimizes the weight of each model in the ensemble to enhance prediction accuracy. This hybrid approach addresses key challenges in the field, such as high dimensionality, redundant features, and classification performance, by introducing an efficient feature selection mechanism and optimizing the weighting of deep learning models in the ensemble. These enhancements result in a model that achieves superior accuracy, generalization, and efficiency compared to traditional methods. The proposed model demonstrated notable advancements in both prediction accuracy and computational efficiency over traditional models. Specifically, it achieved an accuracy of 97.5%, a sensitivity of 97.2%, and a specificity of 97.8%. Additionally, with a 60-40 data split and 5-fold cross-validation, the model showed a significant reduction in training time (90 s), memory consumption (950 MB), and CPU usage (80%), highlighting its effectiveness in processing large, complex medical datasets for heart disease prediction.
Keywords: heart disease prediction; feature selection; ensemble deep learning; optimization; genetic algorithm (GA); tunicate swarm algorithm (TSA)
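The core idea of ensemble weight optimization can be sketched independently of the specific metaheuristic: find a convex combination of the member models' predicted probabilities that minimizes validation loss. The sketch below substitutes SciPy's Nelder-Mead for the paper's Tunicate Swarm Algorithm, so it illustrates only the weighting mechanics, not TSA itself.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_ensemble_weights(probas, y):
    """Given K models' validation probabilities (each probas[k] has shape
    [n_samples, n_classes]) and integer labels y, find convex weights that
    minimize log loss. A softmax reparameterization keeps the weights
    positive and summing to one without explicit constraints."""
    stacked = np.stack(probas)                       # [K, n, C]
    def neg_log_lik(z):
        w = np.exp(z) / np.exp(z).sum()              # softmax over raw params
        p = np.einsum('k,knc->nc', w, stacked)       # weighted probabilities
        return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    res = minimize(neg_log_lik, np.zeros(len(probas)), method='Nelder-Mead')
    return np.exp(res.x) / np.exp(res.x).sum()

# Usage sketch (model names are assumptions):
# w = optimize_ensemble_weights([cnn_val_proba, lstm_val_proba, dnn_val_proba], y_val)
# final test prediction: argmax of sum_k w[k] * model_k_test_proba
```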
6. A Fast Automatic Road Crack Segmentation Method Based on Deep Learning with Model Compression Framework
Authors: Minggang Xu, Chong Li, Xiangli Kong, Yuming Wu, Zhixiang Lu, Jionglong Su, Zhun Fan. Journal of Beijing Institute of Technology, 2025, Issue 4, pp. 388-404 (17 pages).
Computer-vision and deep-learning techniques are widely applied to detect, monitor, and assess pavement conditions, including road crack detection. Traditional methods fail to achieve satisfactory accuracy and generalization performance for crack detection, while complex network models generate redundant feature maps and high computational complexity. Therefore, this paper proposes a novel deep learning-based model compression framework to detect road cracks, which improves both detection efficiency and accuracy. A distillation loss function is proposed to compress the teacher model, followed by channel pruning. Meanwhile, a multi-dilation model is proposed to improve the accuracy of the pruned model. The proposed method is tested on the public CrackForest dataset (CFD). The experimental results show that the proposed method is more efficient and accurate than other state-of-the-art methods.
Keywords: automatic road crack detection; deep learning; u-net; distillation; channel pruning; multi-dilation model
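The distillation loss used to compress a teacher model typically follows Hinton et al.: a temperature-softened KL term plus ordinary cross-entropy on hard labels. A minimal PyTorch sketch of that standard form follows; the temperature and mixing weight are illustrative defaults, not the paper's proposed loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      T: float = 4.0, alpha: float = 0.7):
    """Knowledge-distillation objective: a weighted sum of (a) KL divergence
    between temperature-softened teacher and student distributions and
    (b) cross-entropy against the hard labels. T and alpha are illustrative
    defaults, not values from the paper."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction='batchmean',
    ) * (T * T)                       # T^2 rescales gradients to match hard loss
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```

For segmentation networks such as u-net, the same loss applies per pixel, with the softmax taken over the channel dimension of the logit maps.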
7. Combining deep reinforcement learning with heuristics to solve the traveling salesman problem
Authors: Li Hong, Yu Liu, Mengqiao Xu, Wenhui Deng. Chinese Physics B, 2025, Issue 1, pp. 96-106 (11 pages).
Recent studies employing deep learning to solve the traveling salesman problem (TSP) have mainly focused on learning construction heuristics. Such methods can improve TSP solutions but still depend on additional programs. Methods that instead learn improvement heuristics to iteratively refine solutions remain insufficient: traditional improvement heuristics are guided by a manually designed search strategy and may achieve only limited improvements. This paper proposes a novel framework for learning improvement heuristics, which automatically discovers better improvement policies for iteratively solving the TSP. Our framework first designs a new architecture based on a transformer model to parameterize the policy network, introducing an action-dropout layer to prevent action selection from overfitting. It then proposes a deep reinforcement learning approach integrating a simulated annealing mechanism (named RL-SA) to learn the pairwise selection policy, aiming to improve the 2-opt algorithm's performance. RL-SA leverages the whale optimization algorithm to generate initial solutions for better sampling efficiency and uses a Gaussian perturbation strategy to tackle the sparse reward problem of reinforcement learning. The experiment results show that the proposed approach is significantly superior to state-of-the-art learning-based methods and further reduces the gap between learning-based methods and highly optimized solvers on the benchmark datasets. Moreover, our pre-trained model M can be applied to guide the SA algorithm (named M-SA (ours)), which performs better than existing deep models on small-, medium-, and large-scale TSPLIB datasets. Additionally, M-SA (ours) achieves excellent generalization performance on a real-world dataset of global liner shipping routes, with optimization percentages in distance reduction ranging from 3.52% to 17.99%.
Keywords: traveling salesman problem; deep reinforcement learning; simulated annealing algorithm; transformer model; whale optimization algorithm
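As background for the RL-SA idea, the classical baseline it builds on is 2-opt local search with a simulated-annealing acceptance rule; the paper's contribution is to learn which pair of edges to select rather than sampling them at random. A minimal sketch of that baseline (random pair selection, geometric cooling) follows; all constants are illustrative.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_sa(dist, t0=1.0, cooling=0.995, iters=20000, seed=0):
    """2-opt moves (reverse a segment of the tour) accepted under simulated
    annealing: worse tours pass with probability exp(-delta / T), letting
    the search escape local optima as the temperature T cools."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best, cur_len = tour[:], tour_length(tour, dist)
    best_len, T = cur_len, t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        if j - i < 2:
            continue                                  # degenerate reversal
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]  # 2-opt segment reversal
        cand_len = tour_length(cand, dist)
        delta = cand_len - cur_len
        if delta < 0 or rng.random() < math.exp(-delta / max(T, 1e-9)):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        T *= cooling                                  # geometric cooling
    return best, best_len
```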
8. The Blockchain Neural Network Superior to Deep Learning for Improving the Trust of Supply Chain
Authors: Hsiao-Chun Han, Der-Chen Huang. Computer Modeling in Engineering & Sciences, 2025, Issue 6, pp. 3921-3941 (21 pages).
With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of 'the Overall Performance Characteristics of the Supply Chain' to encompass multiple variables within blockchain data. Utilizing graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models: the Feedforward Neural Network (FNN) and the Deep Clustering Network (DCN). Furthermore, this study retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal database (TEJ). These data are then virtualized using 'the Metaverse Algorithm', and the selected virtualized blockchain variables are used to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation (PoR) algorithm as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, aligning with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (ranging from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate for effectively capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (AI).
Keywords: blockchain neural network; deep learning; consensus algorithm; supply chain management; information security management
9. Fire Hawk Optimization-Enabled Deep Learning Scheme Based Hybrid Cloud Container Architecture for Migrating Interoperability Based Application
Authors: G. Indumathi, R. Sarala. China Communications, 2025, Issue 5, pp. 285-304 (20 pages).
Virtualization is an indispensable part of the cloud, deploying different virtual servers over the same physical layer. However, the increase in the number of applications executing on the repositories results in increased overload due to the adoption of cloud services. Moreover, migrating applications on the cloud with optimized resource allocation is a herculean task, even though it is employed to minimize the dilemma of allocating resources. In this paper, a Fire Hawk Optimization-enabled Deep Learning Scheme (FHOEDLS) is proposed for minimizing overload and optimizing resource allocation in a hybrid cloud container architecture for migrating interoperability-based applications. FHOEDLS achieves load prediction through a deep CNN-GRU-AM model, enabling resource allocation and better migration of applications. It specifically adopts the Fire Hawk Optimization Algorithm (FHOA) to optimize the parameters that aid better interoperable application migration with improved resource allocation and minimized overhead. It takes resource capacity, transmission cost, demand, and predicted load into account when formulating the objective function used for resource allocation and application migration. The cloud simulation of FHOEDLS is achieved using a container, a Virtual Machine (VM), and a Physical Machine (PM). The results of the proposed FHOEDLS confirmed a better resource capability of 0.418 and a minimized load of 0.0061.
Keywords: container; deep learning; fire hawk optimization algorithm; hybrid cloud; interoperable application migration
10. A review of deep learning-based analyses of impact crater detection on different celestial bodies
Authors: Xu Zhang, Jialong Lai, Feifei Cui, Chunyu Ding, Zhicheng Zhong. Astronomical Techniques and Instruments, 2025, Issue 3, pp. 127-147 (21 pages).
Planetary surfaces, shaped by billions of years of geologic evolution, display numerous impact craters whose distribution of size, density, and spatial arrangement reveals the celestial body's history. Identifying these craters is essential for planetary science and is currently achieved mainly with deep learning-driven detection algorithms. However, because impact crater characteristics are substantially affected by the geologic environment, surface materials, and atmospheric conditions, the performance of deep learning models can be inconsistent between celestial bodies. In this paper, we first examine how the surface characteristics of the Moon, Mars, and Earth, along with the differences in their impact crater features, affect model performance. Then, we compare crater detection across celestial bodies by analyzing enhanced convolutional neural networks and U-shaped Convolutional Neural Network-based models to highlight how geology, data, and model design affect accuracy and generalization. Finally, we address current deep learning challenges, suggest directions for model improvement, such as multimodal data fusion and cross-planet learning, and list available impact crater databases. This review can provide necessary technical support for deep space exploration and planetary science, as well as new ideas and directions for future research on automatic detection of impact craters on celestial body surfaces and on planetary geology.
Keywords: crater detection algorithms; deep learning; different celestial bodies; impact crater databases
11. Advanced ECG Signal Analysis for Cardiovascular Disease Diagnosis Using AVOA Optimized Ensembled Deep Transfer Learning Approaches
Authors: Amrutanshu Panigrahi, Abhilash Pati, Bibhuprasad Sahu, Ashis Kumar Pati, Subrata Chowdhury, Khursheed Aurangzeb, Nadeem Javaid, Sheraz Aslam. Computers, Materials & Continua, 2025, Issue 7, pp. 1633-1657 (25 pages).
The integration of IoT and Deep Learning (DL) has significantly advanced real-time health monitoring and predictive maintenance in prognostics and health management (PHM). Electrocardiograms (ECGs) are widely used for cardiovascular disease (CVD) diagnosis, but fluctuating signal patterns make classification challenging. Computer-assisted automated diagnostic tools that enhance ECG signal categorization using sophisticated algorithms and machine learning are helping healthcare practitioners manage greater patient populations. With this motivation, the study proposes a DL framework leveraging the PTB-XL ECG dataset to improve CVD diagnosis. Deep Transfer Learning (DTL) techniques extract features, followed by feature fusion to eliminate redundancy and retain the most informative features. Utilizing the African Vulture Optimization Algorithm (AVOA) for feature selection is more effective than standard methods, as it offers an ideal balance between exploration and exploitation that results in an optimal set of features, improving classification performance while reducing redundancy. Various machine learning classifiers, including Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), and Extreme Learning Machine (ELM), are used for further classification. Additionally, an ensemble model is developed to further improve accuracy. Experimental results demonstrate that the proposed model achieves the highest accuracy of 96.31%, highlighting its effectiveness in enhancing CVD diagnosis.
Keywords: prognostics and health management (PHM); cardiovascular disease (CVD); electrocardiograms (ECGs); deep transfer learning (DTL); African vulture optimization algorithm (AVOA)
12. ISAR autofocus imaging algorithm for maneuvering targets based on deep learning and keystone transform (Cited: 5)
Authors: SHI Hongyin, LIU Yue, GUO Jianwen, LIU Mingxin. Journal of Systems Engineering and Electronics, 2020, Issue 6, pp. 1178-1185 (8 pages).
The issue of small-angle maneuvering target inverse synthetic aperture radar (ISAR) imaging has been successfully addressed by popular motion compensation algorithms. However, when the target's rotational velocity is sufficiently high during the dwell time of the radar, such compensation algorithms cannot obtain a high-quality image. This paper proposes an ISAR imaging algorithm based on the keystone transform and a deep learning algorithm. The keystone transform is used to coarsely compensate for the target's rotational and translational motion, and the deep learning algorithm is used to achieve a super-resolution image. Uniformly distributed point-target data are used as the training set for the u-net network. In addition, this method does not require estimating the motion parameters of the target, which simplifies the algorithm steps. Finally, several experiments are performed to demonstrate the effectiveness of the proposed algorithm.
Keywords: inverse synthetic aperture radar (ISAR); maneuvering target; keystone transform; deep learning; u-net network
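The keystone transform itself is a standard range-walk correction: for each fast-time frequency bin f, slow time is rescaled by fc/(fc+f) so that linear range migration is removed regardless of the target's unknown velocity. A minimal sketch follows, using linear interpolation in place of the usual sinc kernel and assuming the carrier fc is much larger than the sampled bandwidth; sign and scaling conventions vary across the literature.

```python
import numpy as np

def keystone_transform(echo: np.ndarray, fc: float, fs: float) -> np.ndarray:
    """Coarse range-walk correction. `echo` is the raw data matrix
    [n_pulses (slow time), n_samples (fast time)]. Each range-frequency
    column is resampled at slow-time positions scaled by fc / (fc + f)."""
    n_pulses, n_samples = echo.shape
    spec = np.fft.fft(echo, axis=1)                # fast time -> range frequency
    freqs = np.fft.fftfreq(n_samples, d=1.0 / fs)
    m = np.arange(n_pulses, dtype=float)           # slow-time sample index
    out = np.empty_like(spec)
    for k, f in enumerate(freqs):
        src = m * fc / (fc + f)                    # rescaled slow-time positions
        col = spec[:, k]
        # np.interp is real-valued, so interpolate real/imag parts separately
        out[:, k] = np.interp(src, m, col.real) + 1j * np.interp(src, m, col.imag)
    return np.fft.ifft(out, axis=1)                # back to fast time
```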
在线阅读 下载PDF
13. Multi-Robot Task Allocation Using Multimodal Multi-Objective Evolutionary Algorithm Based on Deep Reinforcement Learning (Cited: 5)
Authors: 苗镇华, 黄文焘, 张依恋, 范勤勤. Journal of Shanghai Jiaotong University (Science), 2024, Issue 3, pp. 377-387 (11 pages).
The overall performance of multi-robot collaborative systems is significantly affected by multi-robot task allocation. To improve the effectiveness, robustness, and safety of multi-robot collaborative systems, a multimodal multi-objective evolutionary algorithm based on deep reinforcement learning is proposed in this paper. The improved multimodal multi-objective evolutionary algorithm is used to solve multi-robot task allocation problems. Moreover, a deep reinforcement learning strategy is used in the last generation to provide a high-quality path for each assigned robot in an end-to-end manner. Comparisons with three popular multimodal multi-objective evolutionary algorithms on three different scenarios of multi-robot task allocation problems are carried out to verify the performance of the proposed algorithm. The experimental results show that the proposed algorithm can generate sufficient equivalent schemes to improve the availability and robustness of multi-robot collaborative systems in uncertain environments, and can also produce the best scheme to improve the overall task execution efficiency of multi-robot collaborative systems.
Keywords: multi-robot task allocation; multi-robot cooperation; path planning; multimodal multi-objective evolutionary algorithm; deep reinforcement learning
14. Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (Cited: 2)
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai. Journal of Electronic Science and Technology, 2018, Issue 1, pp. 11-15 (5 pages).
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which strongly influence the performance of the network. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, reducing the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
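The GA-over-hyperparameters idea can be sketched compactly: individuals are dicts of hyper-parameter choices, crossover mixes two parents key-by-key, and mutation resamples a value. The sketch below is generic; the search space and the train-and-score fitness function are placeholder assumptions, not the paper's DBNN setup.

```python
import random

def ga_hyperparams(fitness, space, pop_size=10, gens=15, p_mut=0.2, seed=0):
    """Tiny GA over a discrete hyperparameter space. `space` maps each
    hyperparameter name to candidate values; `fitness` scores a dict of
    choices (higher is better). Note fitness is re-evaluated each call,
    so cache it if training is expensive."""
    rng = random.Random(seed)
    keys = list(space)
    def random_ind():
        return {k: rng.choice(space[k]) for k in keys}
    def crossover(a, b):
        # uniform crossover: each gene comes from either parent
        return {k: (a if rng.random() < 0.5 else b)[k] for k in keys}
    def mutate(ind):
        return {k: (rng.choice(space[k]) if rng.random() < p_mut else v)
                for k, v in ind.items()}
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]               # truncation selection
        pop = elite + [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

# Usage sketch (values are assumptions, not the paper's search space):
# space = {"hidden_units": [128, 256, 512], "epochs": [50, 100, 200],
#          "lr": [1e-3, 1e-2, 1e-1]}
# best = ga_hyperparams(lambda h: -train_dbnn_and_get_error(h), space)
```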
15. Deep Learning and Holt-Trend Algorithms for Predicting Covid-19 Pandemic (Cited: 3)
Authors: Theyazn H. H. Aldhyani, Melfi Alrasheed, Mosleh Hmoud Al-Adaileh, Ahmed Abdullah Alqarni, Mohammed Y. Alzahrani, Ahmed H. Alahmadi. Computers, Materials & Continua, 2021, Issue 5, pp. 2141-2160 (20 pages).
The Covid-19 epidemic poses a serious public health threat to the world, where people with little or no pre-existing human immunity can be more vulnerable to its effects. Thus, developing surveillance systems for predicting the Covid-19 pandemic at an early stage could save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the coronavirus. The Long Short-Term Memory (LSTM) and Holt-trend algorithms were applied to predict confirmed numbers and death cases. The real-time data used were collected from the World Health Organization (WHO). In the proposed research, we considered three countries to test the proposed model, namely Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models show better performance in predicting the cases of coronavirus patients. The standard performance measures of Mean Squared Error (MSE), Root Mean Squared Error (RMSE), mean error, and correlation are employed to evaluate the proposed models. The empirical results of the LSTM, using the correlation metric, are 99.94%, 99.94%, and 99.91% in predicting the number of confirmed cases in the three countries. As for the results of the LSTM model in predicting the number of deaths from Covid-19, they are 99.86%, 98.876%, and 99.16% with respect to Saudi Arabia, Italy, and Spain respectively. Similarly, the results of the Holt-trend model in predicting the number of confirmed cases of Covid-19, using the correlation metric, are 99.06%, 99.96%, and 99.94%, whereas the results of the Holt-trend model in predicting the number of death cases are 99.80%, 99.96%, and 99.94% with respect to Saudi Arabia, Italy, and Spain respectively. The empirical results indicate the efficient performance of the presented models in predicting the number of confirmed and death cases of Covid-19 in these countries. Such findings provide better insights regarding the future of this pandemic in general. The results were obtained by applying time series models, which need to be considered for the sake of saving many lives.
Keywords: deep learning algorithm; Holt-trend; prediction; Covid-19; machine learning
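Holt's linear trend method, the classical half of the comparison, smooths a level and a trend and extrapolates linearly. A minimal sketch follows; the smoothing constants are illustrative, not the fitted values from the study.

```python
def holt_forecast(series, alpha=0.8, beta=0.2, steps=7):
    """Holt's linear trend method: exponentially smooth a level `level` and
    a trend `trend`, then extrapolate h steps ahead as level + h * trend.
    alpha/beta are illustrative smoothing constants."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)   # level update
        trend = beta * (level - prev_level) + (1 - beta) * trend  # trend update
    return [level + (h + 1) * trend for h in range(steps)]

# Usage sketch: project the next week from a daily cumulative case series
# forecast = holt_forecast(confirmed_cases, steps=7)
```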
16. Histopathological Diagnosis System for Gastritis Using Deep Learning Algorithm (Cited: 2)
Authors: Wei Ba, Shuhao Wang, Cancheng Liu, Yuefeng Wang, Huaiyin Shi, Zhigang Song. Chinese Medical Sciences Journal, 2021, Issue 3, pp. 204-209 (6 pages).
Objective: To develop a deep learning algorithm for pathological classification of chronic gastritis and assess its performance using whole-slide images (WSIs). Methods: We retrospectively collected 1,250 gastric biopsy specimens (1,128 gastritis, 122 normal mucosa) from PLA General Hospital. The deep learning algorithm based on the DeepLab v3 (ResNet-50) architecture was trained and validated using 1,008 WSIs and 100 WSIs, respectively. The diagnostic performance of the algorithm was tested on an independent test set of 142 WSIs, with the pathologists' consensus diagnosis as the gold standard. Results: Receiver operating characteristic (ROC) curves were generated for chronic superficial gastritis (CSuG), chronic active gastritis (CAcG), and chronic atrophic gastritis (CAtG) in the test set. The areas under the ROC curves (AUCs) of the algorithm for CSuG, CAcG, and CAtG were 0.882, 0.905, and 0.910, respectively. The sensitivity and specificity of the deep learning algorithm for the classification of CSuG, CAcG, and CAtG were 0.790 and 1.000 (accuracy 0.880), 0.985 and 0.829 (accuracy 0.901), and 0.952 and 0.992 (accuracy 0.986), respectively. The overall predicted accuracy for the three different types of gastritis was 0.867. By flagging the suspicious regions identified by the algorithm in a WSI, a more transparent and interpretable diagnosis can be generated. Conclusion: The deep learning algorithm achieved high accuracy for chronic gastritis classification using WSIs. By pre-highlighting the different gastritis regions, it might be used as an auxiliary diagnostic tool to improve the work efficiency of pathologists.
Keywords: artificial intelligence; deep learning algorithm; gastritis; whole-slide pathological images
17. Gradient Optimizer Algorithm with Hybrid Deep Learning Based Failure Detection and Classification in the Industrial Environment (Cited: 1)
Authors: Mohamed Zarouan, Ibrahim M. Mehedi, Shaikh Abdul Latif, Md. Masud Rana. Computer Modeling in Engineering & Sciences, 2024, Issue 2, pp. 1341-1364 (24 pages).
Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring the seamless operation of the system. Current industrial processes are getting smarter with the emergence of Industry 4.0. Specifically, various modernized industrial processes have been equipped with quite a few sensors to collect process-based data for finding faults arising or prevailing in processes and for monitoring process status. Fault diagnosis of rotating machines plays a main role in the engineering field and industrial production. Due to the disadvantages of existing fault diagnosis approaches, which greatly depend on professional experience and human knowledge, intelligent fault diagnosis based on deep learning (DL) has attracted researchers' interest. DL achieves the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) technique for the industrial environment. The presented GOAHDL-FDC technique initially applies the continuous wavelet transform (CWT) for preprocessing the actual vibrational signals of the rotating machinery. Next, the residual network (ResNet18) model is exploited for extracting features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, GOA-based hyperparameter tuning is performed to adjust the parameter values of the HDL model accurately. The experimental result analysis of the GOAHDL-FDC algorithm takes place using a series of simulations, and the experimentation outcomes highlight the better results of the GOAHDL-FDC technique under different aspects.
Keywords: fault detection; Industry 4.0; gradient optimizer algorithm; deep learning; rotating machineries; artificial intelligence
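The CWT preprocessing step converts each 1-D vibration record into a 2-D time-frequency scalogram before CNN-style feature extraction. A minimal sketch using PyWavelets follows; the Morlet mother wavelet and scale range are common choices assumed here, not confirmed details of the paper.

```python
import numpy as np
import pywt

def vibration_to_scalogram(signal: np.ndarray, fs: float, n_scales: int = 64):
    """Turn a 1-D vibration signal into a 2-D time-frequency scalogram via
    the continuous wavelet transform (Morlet mother wavelet), the usual
    preprocessing step before feeding an image-style fault classifier.
    Returns the magnitude map [n_scales, n_samples] and the frequency axis."""
    scales = np.arange(1, n_scales + 1)
    coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1.0 / fs)
    return np.abs(coeffs), freqs
```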
18. Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection System (Cited: 1)
Authors: Ala Saleh Alluhaidan, Masoud Alajmi, Fahd N. Al-Wesabi, Anwer Mustafa Hilal, Manar Ahmed Hamza, Abdelwahed Motwakel. Computers, Materials & Continua, 2022, Issue 8, pp. 2713-2727 (15 pages).
Human fall detection (FD) plays an important part in creating sensor-based alarm systems, enabling physical therapists to minimize the effect of fall events and save human lives. Generally, elderly people suffer from several diseases, and a fall can occur at any time. In this view, this paper presents an Improved Archimedes Optimization Algorithm with Deep Learning Empowered Fall Detection (IAOA-DLFD) model to identify fall/non-fall events. The proposed IAOA-DLFD technique comprises different levels of pre-processing to improve the input image quality. Besides, the IAOA with a Capsule Network-based feature extractor is derived to produce an optimal set of feature vectors. In addition, the IAOA is used to significantly boost the overall FD performance by optimal choice of CapsNet hyperparameters. Lastly, a radial basis function (RBF) network is applied to determine the proper class labels of the test images. To showcase the enhanced performance of the IAOA-DLFD technique, a wide range of experiments are executed, and the outcomes confirm the enhanced detection of the IAOA-DLFD approach over recent methods, with an accuracy of 0.997.
Keywords: fall detection; intelligent model; deep learning; Archimedes optimization algorithm; capsule network
19. Power System Resiliency and Wide Area Control Employing Deep Learning Algorithm (Cited: 1)
Authors: Pandia Rajan Jeyaraj, Aravind Chellachi Kathiresan, Siva Prakash Asokan, Edward Rajan Samuel Nadar, Hegazy Rezk, Thanikanti Sudhakar Babu. Computers, Materials & Continua, 2021, Issue 7, pp. 553-567 (15 pages).
The power transfer capability of smart transmission grid-connected networks is reduced by inter-area oscillations, since inter-area oscillation modes impede power transfer and destabilize power transmission networks. This fact is more noticeable in smart grid-connected systems, whose infrastructure has more renewable energy resources installed for its operation. To overcome this problem, a deep learning wide-area controller is proposed for real-time parameter control and smart power grid resilience against inter-area oscillation modes. The proposed Deep Wide-Area Controller (DWAC) uses a Deep Belief Network (DBN). The network weights are updated based on real-time data from phasor measurement units. Resilience assessment based on failure probability, financial impact, and time-series data in grid failure management determines the H2 norm. To demonstrate the effectiveness of the proposed framework, a time-domain simulation case study based on the IEEE 39-bus system was performed. For a one-channel attack on the test system, the resiliency index increased to 0.962, and the inter-area damping ξ was reduced to 0.005. The obtained results validate the proposed deep learning algorithm's efficiency in damping inter-area and local oscillations under a 2-channel attack as well. The results also offer robust management of power system resilience and timely control of operating conditions.
Keywords: neural network; deep learning algorithm; low-frequency oscillation; resiliency assessment; smart grid; wide-area control
20. A dynamic fusion path planning algorithm for mobile robots incorporating improved IB-RRT* and deep reinforcement learning (Cited: 1)
Authors: 刘安东, ZHANG Baixin, CUI Qi, ZHANG Dan, NI Hongjie. High Technology Letters, 2023, Issue 4, pp. 365-376 (12 pages).
Dynamic path planning is crucial for mobile robots to navigate successfully in unstructured environments. To achieve a globally optimal path and real-time dynamic obstacle avoidance during movement, a dynamic path planning algorithm incorporating improved IB-RRT* and deep reinforcement learning (DRL) is proposed. Firstly, an improved IB-RRT* algorithm is proposed for global path planning by combining double elliptic subset sampling and probabilistic central circle target bias. Then, to tackle the slow response to dynamic obstacles and inadequate obstacle avoidance of traditional local path planning algorithms, deep reinforcement learning is utilized to predict the movement trend of dynamic obstacles, leading to a dynamic fusion path planning. Finally, the simulation and experiment results demonstrate that the proposed improved IB-RRT* algorithm has higher convergence speed and search efficiency compared with traditional Bi-RRT*, Informed-RRT*, and IB-RRT* algorithms. Furthermore, the proposed fusion algorithm can effectively perform real-time obstacle avoidance and navigation tasks for mobile robots in unstructured environments.
Keywords: mobile robot; improved IB-RRT* algorithm; deep reinforcement learning (DRL); real-time dynamic obstacle avoidance
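For orientation, IB-RRT* extends the basic RRT loop with informed sampling and bidirectional trees; the sketch below shows only that underlying goal-biased RRT loop in a 2-D workspace, with the collision checker, step size, and goal bias as placeholder assumptions.

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, goal_bias=0.1,
        max_iters=5000, tol=0.5, seed=0):
    """Basic goal-biased RRT in 2-D: grow a tree by steering the nearest
    node toward random samples (or, with probability `goal_bias`, toward
    the goal) and return the path once within `tol` of the goal.
    `is_free(p)` is a user-supplied collision check."""
    rng = random.Random(seed)
    (xmin, xmax), (ymin, ymax) = bounds
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        sample = goal if rng.random() < goal_bias else \
            (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[i], sample)
        if d == 0:
            continue
        t = min(step / d, 1.0)          # steer a bounded step toward sample
        new = (nodes[i][0] + t * (sample[0] - nodes[i][0]),
               nodes[i][1] + t * (sample[1] - nodes[i][1]))
        if not is_free(new):
            continue                    # discard colliding extensions
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < tol:  # reached: walk back up the tree
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k]); k = parent[k]
            return path[::-1]
    return None                         # no path found within the budget
```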