Funding: Supported by the National Key Research and Development Program for Young Scientists, China (Grant No. 2021YFC2900400), the Sichuan-Chongqing Science and Technology Innovation Cooperation Program Project, China (Grant No. 2024TIAD-CYKJCXX0269), and the National Natural Science Foundation of China (Grant No. 52304123).
Abstract: Lithology identification while drilling technology can obtain rock information in real time. However, traditional lithology identification models often face limitations in feature extraction and adaptability to complex geological conditions, limiting their accuracy in challenging environments. To address these challenges, a deep learning model for lithology identification while drilling is proposed. The model introduces a dual attention mechanism into the long short-term memory (LSTM) network, effectively enhancing its ability to capture spatial- and channel-dimension information. The crayfish optimization algorithm (COA) is then applied to optimize the network structure, further improving lithology identification capability. Laboratory tests show that the proposed model achieves 97.15% accuracy on the testing set, significantly outperforming the traditional support vector machine (SVM) method (81.77%). Field tests under actual drilling conditions demonstrate an average accuracy of 91.96%, a 14.31% improvement over the LSTM model alone. The proposed model shows robust adaptability and generalization across diverse operational scenarios. This research offers reliable technical support for lithology identification while drilling.
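Editor's note: the abstract does not spell out the attention design, so the following is only a generic sketch of how channel and temporal ("spatial") attention can re-weight an LSTM output sequence. The weights and dimensions are random placeholders, not the authors' architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(h, w_c1, w_c2, w_s):
    """Re-weight an LSTM output sequence h of shape (T, C) along the
    channel and the temporal dimensions.
    w_c1: (C, C//r), w_c2: (C//r, C) -- channel-gate weights
    w_s:  (C, 1)                     -- per-step attention weights
    """
    # Channel attention: squeeze over time, excite each channel.
    squeeze = h.mean(axis=0)                      # (C,)
    gate = sigmoid(squeeze @ w_c1 @ w_c2)         # (C,)
    h_c = h * gate                                # broadcast over time steps

    # Temporal attention: score each step, softmax-normalize.
    scores = (h_c @ w_s).ravel()                  # (T,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return h_c * alpha[:, None], alpha            # re-weighted sequence

# Toy usage with random placeholder weights.
rng = np.random.default_rng(0)
T, C, r = 8, 16, 4
h = rng.standard_normal((T, C))
out, alpha = dual_attention(h,
                            rng.standard_normal((C, C // r)),
                            rng.standard_normal((C // r, C)),
                            rng.standard_normal((C, 1)))
print(out.shape, alpha.round(3))
```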
Funding: Supported by the Beijing Hospitals Authority Youth Programme, No. QML20200505.
Abstract: BACKGROUND Esophageal squamous cell carcinoma is a major histological subtype of esophageal cancer. Many molecular genetic changes are associated with its occurrence. Raman spectroscopy has become a new method for the early diagnosis of tumors because it can reflect the structures of substances and their changes at the molecular level. AIM To detect alterations in Raman spectral information across different stages of esophageal neoplasia. METHODS Different grades of esophageal lesions were collected, yielding a total of 360 groups of Raman spectrum data. A 1D-transformer network model was proposed to handle the task of classifying the spectral data of esophageal squamous cell carcinoma. In addition, a deep learning model was applied to visualize the Raman spectral data and interpret their molecular characteristics. RESULTS A comparison among Raman spectral data with different pathological grades and a visual analysis revealed that the Raman peaks with significant differences were concentrated mainly at 1095 cm^(-1) (DNA, symmetric PO stretching vibration), 1132 cm^(-1) (cytochrome c), 1171 cm^(-1) (acetoacetate), 1216 cm^(-1) (amide III), and 1315 cm^(-1) (glycerol). A comparison among the training results of different models revealed that the 1D-transformer network performed best, achieving 93.30% accuracy, 96.65% specificity, 93.30% sensitivity, and a 93.17% F1 score. CONCLUSION Raman spectroscopy revealed significantly different waveforms for the different stages of esophageal neoplasia. The combination of Raman spectroscopy and deep learning methods could significantly improve the accuracy of classification.
Funding: Supported by the University of Warsaw under the 'New Ideas 3B' competition in POB III, implemented under the 'Excellence Initiative-Research University' Programme.
Abstract: This paper investigates the optimization of data sampling and target labeling techniques to enhance algorithmic trading strategies in cryptocurrency markets, focusing on Bitcoin (BTC) and Ethereum (ETH). Traditional data sampling methods, such as time bars, often fail to capture the nuances of the continuously active and highly volatile cryptocurrency market and force traders to wait for arbitrary points in time. To address this, we propose an alternative approach using information-driven sampling methods, including the CUSUM filter, range bars, volume bars, and dollar bars, and evaluate their performance using tick-level data from January 2018 to June 2023. Additionally, we introduce the Triple Barrier method for target labeling, which offers a solution tailored for algorithmic trading as opposed to the widely used next-bar prediction. We empirically assess the effectiveness of these data sampling and labeling methods for crafting profitable trading strategies. The results demonstrate that the combination of CUSUM-filtered data with Triple Barrier labeling outperforms traditional time bars and next-bar prediction, achieving consistently positive trading performance even after accounting for transaction costs. Moreover, our system enables trading decisions at any point in time on the basis of market conditions, providing an advantage over traditional methods that rely on fixed time intervals. Furthermore, the paper contributes to the ongoing debate on the applicability of Transformer models to time series classification in the context of algorithmic trading by evaluating various Transformer architectures (including the vanilla Transformer encoder, FEDformer, and Autoformer) alongside other deep learning architectures and classical machine learning models, revealing insights into their relative performance.
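For orientation, a minimal sketch of the triple-barrier labeling idea referenced above; the profit-taking/stop-loss widths and the horizon below are illustrative, not the paper's settings, and the event times are arbitrary.

```python
import numpy as np

def triple_barrier_label(prices, t0, pt=0.02, sl=0.02, horizon=20):
    """Label one event starting at index t0.
    Returns +1 if the upper (profit-taking) barrier of +pt is hit first,
    -1 if the lower (stop-loss) barrier of -sl is hit first, and 0 if the
    vertical barrier (t0 + horizon) expires before either is touched."""
    p0 = prices[t0]
    upper, lower = p0 * (1 + pt), p0 * (1 - sl)
    end = min(t0 + horizon, len(prices) - 1)
    for t in range(t0 + 1, end + 1):
        if prices[t] >= upper:
            return +1
        if prices[t] <= lower:
            return -1
    return 0

# Toy usage on a synthetic random-walk price path.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))
events = range(0, 450, 25)                        # hypothetical event times
labels = [triple_barrier_label(prices, t) for t in events]
print(labels)
```

In practice the event times would come from an information-driven sampler such as the CUSUM filter rather than a fixed stride.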
Funding: Supported by the National Natural Science Foundation of China (62394340, 62394345, 62473383). This work was carried out in part using computing resources at the High Performance Computing Center of Central South University.
Abstract: Deep neural networks are increasingly exposed to attack threats, and at the same time the need for privacy protection is growing. As a result, the challenge of developing neural networks that are both robust and capable of strong generalization while maintaining privacy becomes pressing. Training neural networks under privacy constraints is one way to minimize privacy leakage, and one way to do this is to add noise to the data or model. However, noise may cause gradient directions to deviate from the optimal trajectory during training, leading to unstable parameter updates, slow convergence, and reduced model generalization capability. To overcome these challenges, we propose an optimization algorithm based on double-integral coevolutionary neurodynamics (DICND), designed to accelerate convergence and improve generalization in noisy conditions. Theoretical analysis proves the global convergence of the DICND algorithm and demonstrates its ability to converge to near-global minima efficiently under noisy conditions. Numerical simulations and image classification experiments further confirm the DICND algorithm's significant advantages in enhancing generalization performance.
Funding: Supported by the Anhui Province Sports Health Information Monitoring Technology Engineering Research Center Open Project (KF2023012).
Abstract: Deep learning is an effective data mining method that has been used in many fields to solve practical problems. However, deep learning algorithms often contain hyper-parameters that may be continuous, integer, or mixed; these are typically set based on experience, yet they largely affect the effectiveness of activity recognition. To adapt to different hyper-parameter optimization problems, an improved Cuckoo Search (CS) algorithm is proposed to optimize the mixed hyper-parameters of deep learning algorithms. The algorithm optimizes the hyper-parameters of a deep learning model robustly and intelligently selects the combination of integer and continuous hyper-parameters that makes the model optimal. The mixed hyper-parameters of Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and CNN-LSTM models are then optimized with this methodology on smart home activity recognition datasets. Results show that the methodology improves the performance of the deep learning model and that, regardless of the user's level of experience, a better deep learning model can be obtained with this method.
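A simplified sketch of a standard cuckoo search loop with a rounding repair for integer-typed hyper-parameters. The search space, bounds, and the placeholder objective are hypothetical, and the paper's specific improvements to CS are not reproduced here.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical space: learning rate and dropout are continuous, hidden units and epochs are integers.
lo = np.array([1e-4, 0.0, 16.0, 5.0])
hi = np.array([1e-1, 0.5, 256.0, 50.0])
is_int = np.array([False, False, True, True])

def repair(x):
    """Clip to bounds and round the integer-typed dimensions."""
    x = np.clip(x, lo, hi)
    x[is_int] = np.round(x[is_int])
    return x

def loss(x):
    """Placeholder objective; in practice, train the CNN/LSTM with these
    hyper-parameters and return the validation error."""
    lr, drop, units, epochs = x
    return (np.log10(lr) + 2) ** 2 + (drop - 0.3) ** 2 + abs(units - 96) / 256 + abs(epochs - 30) / 50

def levy_step(size, beta=1.5):
    """Mantegna's approximation of a Levy-stable step, as used in cuckoo search."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

n_nests, pa, iterations = 15, 0.25, 60
nests = np.array([repair(rng.uniform(lo, hi)) for _ in range(n_nests)])
fit = np.array([loss(x) for x in nests])

for _ in range(iterations):
    best = nests[fit.argmin()].copy()
    for i in range(n_nests):
        # Levy-flight move around the current best, then greedy replacement.
        cand = repair(nests[i] + 0.01 * levy_step(4) * (nests[i] - best))
        f = loss(cand)
        if f < fit[i]:
            nests[i], fit[i] = cand, f
    # Abandon a fraction pa of the worst nests and rebuild them at random.
    for i in fit.argsort()[-int(pa * n_nests):]:
        nests[i] = repair(rng.uniform(lo, hi))
        fit[i] = loss(nests[i])

print("best hyper-parameters:", nests[fit.argmin()], "loss:", round(fit.min(), 4))
```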
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. DGSSR-2023-02-02341.
Abstract: The potential applications of multimodal physiological signals in healthcare, pain monitoring, and clinical decision support systems have garnered significant attention in biomedical research. Conventional pain assessment rests on subjective self-reporting, which may be unreliable. Deep learning is a promising alternative that can resolve this limitation through automated pain classification. This paper proposes an ensemble deep-learning framework for pain assessment. The framework uses features collected from electromyography (EMG), skin conductance level (SCL), and electrocardiography (ECG) signals. We integrate Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Bidirectional Gated Recurrent Unit (BiGRU), and Deep Neural Network (DNN) models and aggregate their predictions using a weighted averaging ensemble technique to increase the robustness of the classification. To improve computing efficiency and remove redundant features, we use Particle Swarm Optimization (PSO) for feature selection, which reduces the dimensionality of the features without sacrificing classification accuracy. With improved accuracy, precision, recall, and F1-score across all pain levels, the experimental results show that the proposed ensemble model performs better than the individual deep learning classifiers. In our experiments, the model achieved over 98% accuracy, suggesting promising automated pain assessment performance; however, because of differences in validation protocols, comparisons with previous studies remain limited. Combining deep learning and feature selection techniques significantly improves model generalization, reducing overfitting and enhancing classification performance. The evaluation was conducted on the BioVid Heat Pain Dataset, confirming the model's effectiveness in distinguishing between different pain intensity levels.
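The weighted-averaging ensemble step can be pictured as below; the number of base models, class count, and weights are made up for the example and do not correspond to the study's trained networks.

```python
import numpy as np

def weighted_average_ensemble(prob_list, weights):
    """Combine per-model class-probability matrices (each n_samples x n_classes)
    with a normalized weighted average, then take the arg-max class."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    stacked = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    fused = np.tensordot(w, stacked, axes=1)      # (n_samples, n_classes)
    return fused, fused.argmax(axis=1)

# Toy usage: three hypothetical base models, 5 samples, 3 pain levels.
rng = np.random.default_rng(3)
probs = [rng.dirichlet(np.ones(3), size=5) for _ in range(3)]
weights = [0.5, 0.3, 0.2]                         # e.g., proportional to validation accuracy
fused, labels = weighted_average_ensemble(probs, weights)
print(labels)
```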
Abstract: Heart disease prediction is a critical issue in healthcare, where accurate early diagnosis can save lives and reduce healthcare costs. The problem is inherently complex because of the high dimensionality of medical data, irrelevant or redundant features, and variability in risk factors such as age, lifestyle, and medical history. These challenges often lead to inefficient and less accurate models. Traditional prediction methodologies face limitations in effectively handling large feature sets and optimizing classification performance, which can result in overfitting, poor generalization, and high computational cost. This work proposes a novel classification model for heart disease prediction that addresses these challenges by integrating feature selection through a Genetic Algorithm (GA) with an ensemble deep learning approach optimized using the Tunicate Swarm Algorithm (TSA). The GA selects the most relevant features, reducing dimensionality and improving model efficiency. The selected features are then used to train an ensemble of deep learning models, where the TSA optimizes the weight of each model in the ensemble to enhance prediction accuracy. This hybrid approach addresses key challenges in the field, such as high dimensionality, redundant features, and classification performance, by introducing an efficient feature selection mechanism and optimizing the weighting of the deep learning models in the ensemble. These enhancements yield a model with superior accuracy, generalization, and efficiency compared with traditional methods. The proposed model demonstrated notable advancements in both prediction accuracy and computational efficiency, achieving an accuracy of 97.5%, a sensitivity of 97.2%, and a specificity of 97.8%. Additionally, with a 60-40 data split and 5-fold cross-validation, the model showed a significant reduction in training time (90 s), memory consumption (950 MB), and CPU usage (80%), highlighting its effectiveness in processing large, complex medical datasets for heart disease prediction.
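GA-based feature selection of this kind is commonly implemented as evolving binary feature masks. The sketch below is a generic illustration only: the fitness function is a stand-in for the cross-validated classifier accuracy a real pipeline would use, and the population size, generations, and mutation rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def fitness(mask, X, y):
    """Placeholder fitness: correlation of the mean of selected features with the
    label, minus a small penalty per selected feature. In practice this would be
    the cross-validated accuracy of the downstream classifier or ensemble."""
    if not mask.any():
        return -1.0
    return abs(np.corrcoef(X[:, mask].mean(axis=1), y)[0, 1]) - 0.005 * mask.sum()

def ga_feature_selection(X, y, pop_size=20, gens=40, p_mut=0.05):
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.5                # random bit masks
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        new_pop = [pop[scores.argmax()].copy()]               # elitism
        while len(new_pop) < pop_size:
            # Binary tournament selection for each parent.
            a, b = rng.integers(0, pop_size, 2), rng.integers(0, pop_size, 2)
            p1 = pop[a[0]] if scores[a[0]] > scores[a[1]] else pop[a[1]]
            p2 = pop[b[0]] if scores[b[0]] > scores[b[1]] else pop[b[1]]
            cut = rng.integers(1, n_feat)                     # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            child ^= rng.random(n_feat) < p_mut               # bit-flip mutation
            new_pop.append(child)
        pop = np.array(new_pop)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]

# Toy usage with a synthetic dataset.
X = rng.standard_normal((200, 30))
y = (X[:, 3] + X[:, 7] > 0).astype(float)
mask = ga_feature_selection(X, y)
print("selected feature indices:", np.flatnonzero(mask))
```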
Funding: Supported in part by the Jiangsu Province Construction System Science and Technology Project (No. 2024ZD056) and the Research Development Fund of Xi'an Jiaotong-Liverpool University (No. RDF-24-01-097).
Abstract: Computer-vision and deep-learning techniques are widely applied to detect, monitor, and assess pavement conditions, including road crack detection. Traditional methods fail to achieve satisfactory accuracy and generalization performance in crack detection, and complex network models can generate redundant feature maps and high computational complexity. Therefore, this paper proposes a novel deep learning-based model compression framework for detecting road cracks that improves both detection efficiency and accuracy. A distillation loss function is proposed to compress the teacher model, followed by channel pruning. Meanwhile, a multi-dilation model is proposed to improve the accuracy of the pruned model. The proposed method is tested on the public CrackForest dataset (CFD). The experimental results show that the proposed method is more efficient and accurate than other state-of-the-art methods.
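The abstract does not give the distillation loss itself; the sketch below shows the standard soft-target (temperature-scaled) formulation for orientation only, with made-up logits and blending coefficients.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target distillation: KL divergence between temperature-softened teacher
    and student distributions, blended with the usual cross-entropy on hard labels."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1).mean()
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * (T * T) * kl + (1 - alpha) * ce

# Toy usage: 4 samples, 2 classes (crack / no crack).
rng = np.random.default_rng(5)
teacher = rng.standard_normal((4, 2)) * 3
student = rng.standard_normal((4, 2))
labels = np.array([0, 1, 1, 0])
print(distillation_loss(student, teacher, labels))
```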
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 72101046 and 61672128).
Abstract: Recent studies employing deep learning to solve the traveling salesman problem (TSP) have mainly focused on learning construction heuristics. Such methods can improve TSP solutions but still depend on additional programs, while methods that learn improvement heuristics to iteratively refine solutions remain insufficient. Traditional improvement heuristics are guided by a manually designed search strategy and may achieve only limited improvements. This paper proposes a novel framework for learning improvement heuristics, which automatically discovers better improvement policies for iteratively solving the TSP. The framework first designs a new transformer-based architecture to parameterize the policy network, introducing an action-dropout layer to prevent action selection from overfitting. It then proposes a deep reinforcement learning approach integrating a simulated annealing mechanism (RL-SA) to learn the pairwise selection policy, aiming to improve the performance of the 2-opt algorithm. RL-SA leverages the whale optimization algorithm to generate initial solutions for better sampling efficiency and uses a Gaussian perturbation strategy to tackle the sparse reward problem of reinforcement learning. Experimental results show that the proposed approach is significantly superior to state-of-the-art learning-based methods and further narrows the gap between learning-based methods and highly optimized solvers on the benchmark datasets. Moreover, the pre-trained model M can be applied to guide the SA algorithm (M-SA), which performs better than existing deep models on small-, medium-, and large-scale TSPLIB datasets. Additionally, M-SA achieves excellent generalization performance on a real-world dataset of global liner shipping routes, with distance reductions ranging from 3.52% to 17.99%.
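As a baseline picture of what the learned policy improves upon, here is the classical 2-opt move combined with a simulated-annealing acceptance rule on a toy instance. This is not the RL-SA method itself (no learned selection policy, whale-optimization initialization, or Gaussian perturbation); parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def tour_length(tour, dist):
    return dist[tour, np.roll(tour, -1)].sum()

def two_opt_sa(dist, iters=20000, t0=1.0, cooling=0.9995):
    """Improvement heuristic: repeatedly propose 2-opt moves (reverse a tour segment)
    and accept them with a simulated-annealing criterion."""
    n = dist.shape[0]
    tour = rng.permutation(n)
    best, best_len = tour.copy(), tour_length(tour, dist)
    cur_len, temp = best_len, t0
    for _ in range(iters):
        i, j = sorted(rng.integers(0, n, 2))
        if j - i < 2:
            continue
        cand = tour.copy()
        cand[i:j] = cand[i:j][::-1]               # the 2-opt segment reversal
        cand_len = tour_length(cand, dist)
        # Accept improvements always, worsenings with Boltzmann probability.
        if cand_len < cur_len or rng.random() < np.exp((cur_len - cand_len) / temp):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour.copy(), cur_len
        temp *= cooling
    return best, best_len

# Toy usage: 30 random cities in the unit square.
pts = rng.random((30, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, length = two_opt_sa(dist)
print("tour length:", round(length, 3))
```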
Abstract: With the increasing importance of supply chain transparency, blockchain-based data has emerged as a valuable and verifiable source for analyzing procurement transaction risks. This study extends the mathematical model and proof of 'the Overall Performance Characteristics of the Supply Chain' to encompass multiple variables within blockchain data. Using graph theory, the model is further developed into a single-layer neural network, which serves as the foundation for constructing two multi-layer deep learning neural network models: a Feedforward Neural Network (FNN) and a Deep Clustering Network (DCN). The study also retrieves corporate data from the Chunghwa Yellow Pages online resource and the Taiwan Economic Journal (TEJ) database. These data are then virtualized using 'the Metaverse Algorithm', and the selected virtualized blockchain variables are used to train a neural network model for classification. The results demonstrate that a single-layer neural network model, leveraging blockchain data and employing the Proof of Relation (PoR) algorithm as the activation function, effectively identifies anomalous enterprises, which constitute 7.2% of the total sample, in line with expectations. In contrast, the multi-layer neural network models, DCN and FNN, classify an excessively large proportion of enterprises as anomalous (from one-fourth to one-third), which deviates from expectations. This indicates that deep learning may still be inadequate for capturing or identifying malicious corporate behaviors associated with distortions in procurement transaction data. In other words, procurement transaction blockchain data possesses intrinsic value that cannot be replaced by artificial intelligence (AI).
Abstract: Virtualization is an indispensable part of the cloud, used to deploy different virtual servers over the same physical layer. However, the increase in the number of applications executing on the repositories results in increased overload due to the adoption of cloud services. Moreover, migrating applications on the cloud with optimized resource allocation is a herculean task, even though it is employed to minimize the dilemma of allocating resources. In this paper, a Fire Hawk Optimization enabled Deep Learning Scheme (FHOEDLS) is proposed for minimizing overload and optimizing resource allocation on a hybrid cloud container architecture for migrating interoperability-based applications. FHOEDLS achieves load prediction through a deep CNN-GRU-AM model to attain resource allocation and better migration of applications. It specifically adopts the Fire Hawk Optimization Algorithm (FHOA) to optimize the parameters that influence interoperable application migration, improving resource allocation and minimizing overhead. The factors of resource capacity, transmission cost, demand, and predicted load are taken into account in formulating the objective function used for resource allocation and application migration. The cloud simulation of FHOEDLS is achieved using a container, a Virtual Machine (VM), and a Physical Machine (PM). The results of the proposed FHOEDLS confirmed a better resource capability of 0.418 and a minimized load of 0.0061.
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R909), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. The approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements over state-of-the-art heuristic and meta-heuristic approaches: a 40% latency reduction, a 25% throughput increase, 85% resource utilization (versus 60% for heuristic methods), a 40% reduction in energy consumption (300 vs. 500 J per task), and a 50% improvement in scalability factor (1.8 vs. 1.2 for EDF). These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
Funding: Funded by the National Natural Science Foundation of China (12363009 and 12103020), the Natural Science Foundation of Jiangxi Province (20224BAB211011), the Youth Talent Project of the Science and Technology Plan of Ganzhou (2022CXRC9191 and 2023CYZ26970), and the Jiangxi Province Graduate Innovation Special Funds Project (YC2024-S529 and YC2023-S672).
Abstract: Planetary surfaces, shaped by billions of years of geologic evolution, display numerous impact craters whose size, density, and spatial distribution reveal a celestial body's history. Identifying these craters is essential for planetary science and is currently achieved mainly with deep learning-driven detection algorithms. However, because impact crater characteristics are substantially affected by the geologic environment, surface materials, and atmospheric conditions, the performance of deep learning models can be inconsistent between celestial bodies. In this paper, we first examine how the surface characteristics of the Moon, Mars, and Earth, along with the differences in their impact crater features, affect model performance. We then compare crater detection across celestial bodies by analyzing enhanced convolutional neural networks and U-shaped convolutional neural network-based models, highlighting how geology, data, and model design affect accuracy and generalization. Finally, we address current deep learning challenges, suggest directions for model improvement, such as multimodal data fusion and cross-planet learning, and list available impact crater databases. This review can provide necessary technical support for deep space exploration and planetary science, as well as new ideas and directions for future research on the automatic detection of impact craters on celestial body surfaces and on planetary geology.
Funding: Funded by Researchers Supporting Project Number (RSPD2025R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: The integration of IoT and Deep Learning (DL) has significantly advanced real-time health monitoring and predictive maintenance in prognostics and health management (PHM). Electrocardiograms (ECGs) are widely used for cardiovascular disease (CVD) diagnosis, but fluctuating signal patterns make classification challenging. Computer-assisted automated diagnostic tools that enhance ECG signal categorization using sophisticated algorithms and machine learning are helping healthcare practitioners manage larger patient populations. With this motivation, the study proposes a DL framework leveraging the PTB-XL ECG dataset to improve CVD diagnosis. Deep Transfer Learning (DTL) techniques extract features, followed by feature fusion to eliminate redundancy and retain the most informative features. The African Vulture Optimization Algorithm (AVOA) is used for feature selection; it is more effective than standard methods because it offers a good balance between exploration and exploitation, yielding an optimal feature set that improves classification performance while reducing redundancy. Various machine learning classifiers, including Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), and Extreme Learning Machine (ELM), are used for the final classification. Additionally, an ensemble model is developed to further improve accuracy. Experimental results demonstrate that the proposed model achieves the highest accuracy of 96.31%, highlighting its effectiveness in enhancing CVD diagnosis.
Funding: Supported by the Shanghai Pujiang Program (No. 22PJD030), the National Natural Science Foundation of China (Nos. 61603244 and 71904116), and the National Natural Science Foundation of China-Shandong Joint Fund (No. U2006228).
Abstract: The overall performance of multi-robot collaborative systems is significantly affected by multi-robot task allocation. To improve the effectiveness, robustness, and safety of multi-robot collaborative systems, this paper proposes a multimodal multi-objective evolutionary algorithm based on deep reinforcement learning. The improved multimodal multi-objective evolutionary algorithm is used to solve multi-robot task allocation problems, and a deep reinforcement learning strategy is used in the last generation to provide a high-quality path for each assigned robot in an end-to-end manner. Comparisons with three popular multimodal multi-objective evolutionary algorithms on three different multi-robot task allocation scenarios verify the performance of the proposed algorithm. The experimental results show that the proposed algorithm can generate sufficient equivalent schemes to improve the availability and robustness of multi-robot collaborative systems in uncertain environments, and it also produces the best scheme to improve the overall task execution efficiency of multi-robot collaborative systems.
Funding: This work was supported by the National Natural Science Foundation of China (61571388, 61871465, 62071414) and the Project of Introducing Overseas Students in Hebei Province (C20200367).
Abstract: The problem of inverse synthetic aperture radar (ISAR) imaging of small-angle maneuvering targets has been successfully addressed by popular motion compensation algorithms. However, when the target's rotational velocity is sufficiently high during the radar dwell time, such compensation algorithms cannot obtain a high-quality image. This paper proposes an ISAR imaging algorithm based on the keystone transform and a deep learning algorithm. The keystone transform is used to coarsely compensate for the target's rotational and translational motion, and the deep learning algorithm is used to achieve a super-resolution image. Uniformly distributed point-target data are used as the training data set for the U-Net network. In addition, this method does not require estimating the motion parameters of the target, which simplifies the algorithm's steps. Finally, several experiments are performed to demonstrate the effectiveness of the proposed algorithm.
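A rough sketch of the classical keystone rescaling step on synthetic range-frequency/slow-time data, assuming the usual slow-time substitution t_m = (f_c / (f_c + f)) * tau. The radar parameters and data are placeholders, linear interpolation stands in for sinc interpolation, and the rest of the imaging chain (and the U-Net stage) is omitted.

```python
import numpy as np

def keystone_transform(S, f_c, f_r, t_m):
    """Rescale the slow-time axis of range-frequency / slow-time data S so that
    the frequency-dependent slow-time coupling (linear range migration) is removed.
    S   : complex array, shape (n_freq, n_pulses), rows indexed by range frequency
    f_c : carrier frequency (Hz)
    f_r : baseband range-frequency bins (Hz), length n_freq
    t_m : slow-time sampling instants (s), length n_pulses
    """
    out = np.zeros_like(S)
    for i, f in enumerate(f_r):
        # Evaluate each row at t = (f_c / (f_c + f)) * tau on the uniform tau grid.
        t_query = (f_c / (f_c + f)) * t_m
        out[i] = (np.interp(t_query, t_m, S[i].real)
                  + 1j * np.interp(t_query, t_m, S[i].imag))
    return out

# Toy usage with illustrative dimensions and placeholder data.
n_freq, n_pulses = 64, 128
f_c = 10e9                                        # 10 GHz carrier
f_r = np.linspace(-250e6, 250e6, n_freq)          # 500 MHz bandwidth
t_m = np.arange(n_pulses) * 1e-3                  # 1 ms pulse repetition interval
S = np.exp(1j * 2 * np.pi * np.outer(f_r, t_m) * 1e-3)   # synthetic signal
S_key = keystone_transform(S, f_c, f_r, t_m)
print(S_key.shape)
```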
Abstract: Objective To develop a deep learning algorithm for the pathological classification of chronic gastritis and assess its performance using whole-slide images (WSIs). Methods We retrospectively collected 1,250 gastric biopsy specimens (1,128 gastritis, 122 normal mucosa) from PLA General Hospital. A deep learning algorithm based on the DeepLab v3 (ResNet-50) architecture was trained and validated using 1,008 WSIs and 100 WSIs, respectively. The diagnostic performance of the algorithm was tested on an independent test set of 142 WSIs, with the pathologists' consensus diagnosis as the gold standard. Results Receiver operating characteristic (ROC) curves were generated for chronic superficial gastritis (CSuG), chronic active gastritis (CAcG), and chronic atrophic gastritis (CAtG) in the test set. The areas under the ROC curves (AUCs) of the algorithm for CSuG, CAcG, and CAtG were 0.882, 0.905, and 0.910, respectively. The sensitivity and specificity of the deep learning algorithm for the classification of CSuG, CAcG, and CAtG were 0.790 and 1.000 (accuracy 0.880), 0.985 and 0.829 (accuracy 0.901), and 0.952 and 0.992 (accuracy 0.986), respectively. The overall predicted accuracy for the three types of gastritis was 0.867. By flagging the suspicious regions identified by the algorithm in a WSI, a more transparent and interpretable diagnosis can be generated. Conclusion The deep learning algorithm achieved high accuracy for chronic gastritis classification using WSIs. By pre-highlighting the different gastritis regions, it might be used as an auxiliary diagnostic tool to improve the work efficiency of pathologists.
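For reference, the reported metrics (sensitivity, specificity, accuracy, AUC) for one binary decision can be computed as below; the per-slide scores here are synthetic and unrelated to the study's data.

```python
import numpy as np

def binary_metrics(y_true, scores, threshold=0.5):
    """Sensitivity, specificity, and accuracy at a fixed threshold, plus AUC
    computed with the rank (Mann-Whitney U) formulation."""
    y_pred = (scores >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(y_true)
    order = scores.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = y_true.sum(), len(y_true) - y_true.sum()
    auc = (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return sens, spec, acc, auc

# Toy usage with synthetic per-slide probabilities for one gastritis subtype.
rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, 142)
scores = y_true * 0.6 + rng.normal(0.3, 0.2, 142)
print(binary_metrics(y_true, scores))
```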
Abstract: The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which strongly influence the performance of the network. We propose a genetic algorithm (GA)-based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method delivers superior performance on the optimized robot object recognition and grasping tasks.
Abstract: The Covid-19 epidemic poses a serious public health threat to the world, where people with little or no pre-existing human immunity can be more vulnerable to its effects. Thus, developing surveillance systems for predicting the Covid-19 pandemic at an early stage could save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the coronavirus. The Long Short-Term Memory (LSTM) and Holt-trend algorithms were applied to predict confirmed cases and death cases. The real-time data used were collected from the World Health Organization (WHO). Three countries were considered to test the proposed models: Saudi Arabia, Spain, and Italy. The results suggest that the LSTM models show better performance in predicting the cases of coronavirus patients. Standard performance measures, namely Mean Squared Error (MSE), Root Mean Squared Error (RMSE), mean error, and correlation, are employed to evaluate the results of the proposed models. The empirical results of the LSTM, using the correlation metric, are 99.94%, 99.94%, and 99.91% in predicting the number of confirmed cases in the three countries. For predicting the number of Covid-19 deaths, the LSTM results are 99.86%, 98.876%, and 99.16% for Saudi Arabia, Italy, and Spain, respectively. Similarly, the results of the Holt-trend model in predicting the number of confirmed Covid-19 cases, using the correlation metric, are 99.06%, 99.96%, and 99.94%, whereas its results in predicting the number of death cases are 99.80%, 99.96%, and 99.94% for Saudi Arabia, Italy, and Spain, respectively. The empirical results indicate the efficient performance of the presented models in predicting the number of confirmed and death cases of Covid-19 in these countries. Such findings provide better insights regarding the future of this pandemic in general. The results were obtained by applying time series models, which need to be considered for the sake of saving the lives of many people.
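Holt's linear trend method (double exponential smoothing) has a compact standard form, sketched below on a synthetic cumulative-case series; the smoothing constants and the horizon are illustrative, not those fitted in the paper.

```python
import numpy as np

def holt_linear_trend(y, alpha=0.4, beta=0.2, horizon=7):
    """Holt's linear trend method:
        level_t = alpha * y_t + (1 - alpha) * (level_{t-1} + trend_{t-1})
        trend_t = beta * (level_t - level_{t-1}) + (1 - beta) * trend_{t-1}
    Forecast h steps ahead as level_T + h * trend_T."""
    level, trend = y[0], y[1] - y[0]
    fitted = [level]
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        fitted.append(level)
    forecast = level + trend * np.arange(1, horizon + 1)
    return np.array(fitted), forecast

# Toy usage on a synthetic cumulative-case series.
rng = np.random.default_rng(8)
cases = np.cumsum(rng.poisson(120, 60)).astype(float)
fitted, forecast = holt_linear_trend(cases)
print("next-week forecast:", forecast.round(0))
```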