Journal Articles
43 articles found
1. Enhancing Fire Detection with YOLO Models: A Bayesian Hyperparameter Tuning Approach
Authors: Van-Ha Hoang, Jong Weon Lee, Chun-Su Park. Computers, Materials & Continua, 2025, No. 6, pp. 4097-4116 (20 pages)
Fire can cause significant damage to the environment, economy, and human lives. If fire can be detected early, the damage can be minimized. Advances in technology, particularly in computer vision powered by deep learning, have enabled automated fire detection in images and videos. Several deep learning models have been developed for object detection, including applications in fire and smoke detection. This study focuses on optimizing the training hyperparameters of YOLOv8 and YOLOv10 models using Bayesian Tuning (BT). Experimental results on the large-scale D-Fire dataset demonstrate that this approach enhances detection performance. Specifically, the proposed approach improves the mean average precision at an Intersection over Union (IoU) threshold of 0.5 (mAP50) of the YOLOv8s, YOLOv10s, YOLOv8l, and YOLOv10l models by 0.26, 0.21, 0.84, and 0.63, respectively, compared to models trained with the default hyperparameters. The performance gains are more pronounced in larger models, YOLOv8l and YOLOv10l, than in their smaller counterparts, YOLOv8s and YOLOv10s. Furthermore, YOLOv8 models consistently outperform YOLOv10, with mAP50 improvements of 0.26 for YOLOv8s over YOLOv10s and 0.65 for YOLOv8l over YOLOv10l when trained with BT. These results establish YOLOv8 as the preferred model for fire detection applications where detection performance is prioritized.
Keywords: Fire detection; smoke detection; deep learning; YOLO; Bayesian hyperparameter tuning; hyperparameter optimization; Optuna
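As a rough illustration of the Bayesian tuning workflow described above (not the authors' code), the sketch below uses Optuna to search a few YOLOv8 training hyperparameters and scores each trial by validation mAP50; the dataset YAML name, the search ranges, and the `metrics.box.map50` attribute access are assumptions.

```python
# Sketch: Bayesian (TPE) search over YOLOv8 training hyperparameters with Optuna.
# Assumes the Ultralytics package and a D-Fire-style dataset YAML are available.
import optuna
from ultralytics import YOLO

def objective(trial):
    # Hyperparameters to tune; ranges are illustrative, not taken from the paper.
    lr0 = trial.suggest_float("lr0", 1e-4, 1e-1, log=True)
    momentum = trial.suggest_float("momentum", 0.8, 0.98)
    weight_decay = trial.suggest_float("weight_decay", 1e-5, 1e-2, log=True)

    model = YOLO("yolov8s.pt")  # pretrained small model
    model.train(data="d-fire.yaml", epochs=30, lr0=lr0,
                momentum=momentum, weight_decay=weight_decay, verbose=False)
    metrics = model.val()          # evaluate on the validation split
    return metrics.box.map50       # maximize mAP at IoU 0.5 (attribute name assumed)

study = optuna.create_study(direction="maximize")  # Optuna uses a TPE sampler by default
study.optimize(objective, n_trials=25)
print("Best hyperparameters:", study.best_params)
```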
2. Hybrid XGBoost model with hyperparameter tuning for prediction of liver disease with better accuracy (cited 2 times)
Authors: Surjeet Dalal, Edeh Michael Onyema, Amit Malik. World Journal of Gastroenterology (SCIE, CAS), 2022, No. 46, pp. 6551-6563 (13 pages)
BACKGROUND: Liver disease indicates any pathology that can harm or destroy the liver or prevent it from normal functioning. The global community has recently witnessed an increase in the mortality rate due to liver disease. This could be attributed to many factors, among which are human habits, awareness issues, poor healthcare, and late detection. To curb the growing threats from liver disease, early detection is critical to help reduce the risks and improve treatment outcome. Emerging technologies such as machine learning, as shown in this study, could be deployed to assist in enhancing its prediction and treatment. AIM: To present a more efficient system for timely prediction of liver disease using a hybrid eXtreme Gradient Boosting model with hyperparameter tuning, with a view to assist in early detection, diagnosis, and reduction of risks and mortality associated with the disease. METHODS: The dataset used in this study consisted of 416 people with liver problems and 167 with no such history. The data were collected from the state of Andhra Pradesh, India, through https://www.kaggle.com/datasets/uciml/indian-liver-patientrecords. The population was divided into two sets depending on the disease state of the patient. This binary information was recorded in the attribute "is_patient". RESULTS: The results indicated that the chi-square automated interaction detection and classification and regression trees models achieved accuracy levels of 71.36% and 73.24%, respectively, which was much better than the conventional method. The proposed solution would assist patients and physicians in tackling the problem of liver disease and ensuring that cases are detected early to prevent it from developing into cirrhosis (scarring) and to enhance the survival of patients. The study showed the potential of machine learning in health care, especially as it concerns disease prediction and monitoring. CONCLUSION: This study contributed to the knowledge of machine learning application to health and to the efforts toward combating the problem of liver disease. However, relevant authorities have to invest more into machine learning research and other health technologies to maximize their potential.
Keywords: Liver infection; Machine learning; Chi-square automated interaction detection; Classification and regression trees; Decision tree; XGBoost; hyperparameter tuning
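A minimal sketch (not the authors' pipeline) of hyperparameter tuning for an XGBoost classifier on an Indian Liver Patient-style table; the CSV filename, column handling, and search grid are assumptions for illustration.

```python
# Sketch: XGBoost with grid-searched hyperparameters for binary liver-disease prediction.
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

df = pd.read_csv("indian_liver_patient.csv")           # hypothetical local copy of the dataset
X = pd.get_dummies(df.drop(columns=["is_patient"]))     # one-hot encode any categorical columns
y = LabelEncoder().fit_transform(df["is_patient"])      # map labels to 0/1
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

param_grid = {                                           # illustrative ranges only
    "n_estimators": [100, 300],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1, 0.3],
    "subsample": [0.8, 1.0],
}
search = GridSearchCV(XGBClassifier(), param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print("Best params:", search.best_params_)
print("Test accuracy:", search.best_estimator_.score(X_test, y_test))
```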
3. Hyperparameter Tuning Based Machine Learning Classifier for Breast Cancer Prediction
Authors: Mohammed Mijanur Rahman, Asikur Rahman, Swarnali Akter, Sumiea Akter Pinky. Journal of Computer and Communications, 2023, No. 4, pp. 149-165 (17 pages)
Currently, the second most devastating form of cancer in people, particularly in women, is breast cancer (BC). In the healthcare industry, machine learning (ML) is commonly employed in fatal disease prediction. Because breast cancer has a favourable prognosis when detected at an early stage, a model is created using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. The model's main aim is to compare the effectiveness of five well-known ML classifiers, Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Naive Bayes (NB), against the conventional method. To counterbalance the conventional methods, hyperparameter tuning with the grid search method was applied, which improved accuracy and, in turn, precision, recall, F1 score, and the AUC-ROC curve. With hyperparameter tuning, accuracy increased from 94.15% to 98.83%, whereas the accuracy of the conventional method increased from 93.56% to 97.08%. According to this investigation, KNN outperformed all other classifiers in terms of accuracy, achieving a score of 98.83%. In conclusion, the study shows that KNN works well with hyperparameter tuning. These analyses show that this prediction approach is useful in prognosticating women with breast cancer, with viable performance and more accurate findings compared to the conventional approach.
Keywords: Machine Learning; Breast Cancer Prediction; Grid Search; Hyperparameter Tuning
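A short sketch of the grid-search idea described above, tuning a KNN classifier on the WDBC dataset with scikit-learn; the parameter grid and scaling choice are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch: grid search over KNN hyperparameters on the WDBC dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

pipe = make_pipeline(StandardScaler(), KNeighborsClassifier())
param_grid = {                                            # illustrative grid
    "kneighborsclassifier__n_neighbors": list(range(3, 16, 2)),
    "kneighborsclassifier__weights": ["uniform", "distance"],
    "kneighborsclassifier__p": [1, 2],                    # Manhattan vs Euclidean distance
}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print("Best KNN settings:", search.best_params_)
print("Held-out accuracy:", search.score(X_test, y_test))
```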
4. Grid Search for Predicting Coronary Heart Disease by Tuning Hyper-Parameters (cited 2 times)
Authors: S. Prabu, B. Thiyaneswaran, M. Sujatha, C. Nalini, Sujatha Rajkumar. Computer Systems Science & Engineering (SCIE, EI), 2022, No. 11, pp. 737-749 (13 pages)
Diagnosing cardiovascular disease is one of the biggest medical difficulties of recent years. Coronary heart disease (CHD) is a kind of heart and blood-vessel disease, and predicting this sort of cardiac illness leads to more precise decisions for cardiac disorders. Implementing Grid Search Optimization (GSO) machine learning models is therefore a useful way to forecast the sickness as soon as possible. The state-of-the-art work is the tuning of hyperparameters together with feature selection, utilizing the model search to minimize the false-negative rate. Three models with a cross-validation approach perform the required task. Feature selection is based on statistical and correlation matrices for multivariate analysis. For the random search and grid search models, extensive comparison findings are produced utilizing recall, F1 score, and precision measurements. The models are evaluated using these metrics and kappa statistics, which illustrate the comparability of the three models. The study focuses on optimizing feature selection and tweaking hyperparameters to improve model accuracy and the prediction of heart disease by examining the Framingham dataset using random forest classification. Tuning the hyperparameters in the grid search model thus decreases the error rate and achieves global optimization.
Keywords: Grid search; coronary heart disease (CHD); machine learning; feature selection; hyperparameter tuning
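A hedged sketch contrasting grid search and random search for a random-forest CHD classifier, in the spirit of the comparison above; the `framingham.csv` filename, the `TenYearCHD` target column, and the search spaces are assumptions about the public dataset layout.

```python
# Sketch: comparing GridSearchCV and RandomizedSearchCV for a random-forest CHD classifier.
import pandas as pd
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

df = pd.read_csv("framingham.csv").dropna()             # hypothetical local copy
X, y = df.drop(columns=["TenYearCHD"]), df["TenYearCHD"]

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [4, 8, None], "min_samples_leaf": [1, 5]},
    cv=5, scoring="recall",                              # emphasize the false-negative rate
)
rand = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": randint(50, 500), "max_depth": randint(3, 15),
     "min_samples_leaf": randint(1, 10)},
    n_iter=30, cv=5, scoring="recall", random_state=0,
)
for name, search in [("grid search", grid), ("random search", rand)]:
    search.fit(X, y)
    print(name, search.best_score_, search.best_params_)
```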
5. Energy Efficient Hyperparameter Tuned Deep Neural Network to Improve Accuracy of Near-Threshold Processor
Authors: K. Chanthirasekaran, Raghu Gundaala. Intelligent Automation & Soft Computing (SCIE), 2023, No. 7, pp. 471-489 (19 pages)
When it comes to decreasing margins and increasing energy efficiency in near-threshold and sub-threshold processors, timing error resilience may be viewed as a potentially lucrative alternative to examine. On the other hand, the currently employed approaches have certain restrictions, including high levels of design complexity, severe time constraints on error consolidation and propagation, and uncontaminated architectural registers (ARs). The design of near-threshold (NT) circuits is becoming the approach of choice for the construction of energy-efficient digital circuits; one of the downsides is a reduction in performance as a result of the exponentially decreased driving current. Numerous studies have advised applying NT techniques to chip multiprocessors as a means to preserve outstanding energy efficiency while minimising performance loss. Over the past several years, there has been a clear growth in interest in the development of artificial intelligence (AI) hardware with low energy consumption. This has resulted in both large corporations and start-ups producing items that compete on the basis of varying degrees of performance and energy use. The ultimate goal of this technology is to provide levels of efficiency and performance that could not be achieved with graphics processing units or general-purpose CPUs; to achieve this objective, the technology integrates several processing units into a single chip, and the hardware was designed with a number of unique properties. In this study, an Energy Efficient Hyperparameter Tuned Deep Neural Network (EEHPT-DNN) model for a variation-tolerant near-threshold processor was developed. In order to improve the energy efficiency of AI, the EEHPT-DNN model employs several AI techniques, focusing mostly on the repercussions of embedded technologies positioned at the network's edge. The presented model employs a deep stacked sparse autoencoder (DSSAE) model with the objective of creating a variation-tolerant NT processor. The time-consuming method of modifying hyperparameters through trial and error is substituted with the marine predators optimization algorithm (MPO), which is utilised to tune the hyperparameters associated with the DSSAE model. To validate that the proposed EEHPT-DNN model has a higher degree of functionality, a full simulation study was conducted and the results were analysed from a variety of perspectives so that the enhanced performance could be evaluated. According to the results of the study, which compared numerous DL models, the EEHPT-DNN model performed significantly better than the other models.
Keywords: Deep learning; hyperparameter tuning; artificial intelligence; near-threshold processor; embedded system
6. Abstractive Arabic Text Summarization Using Hyperparameter Tuned Denoising Deep Neural Network
Authors: Ibrahim M. Alwayle, Hala J. Alshahrani, Saud S. Alotaibi, Khaled M. Alalayah, Amira Sayed A. Aziz, Khadija M. Alaidarous, Ibrahim Abdulrab Ahmed, Manar Ahmed Hamza. Intelligent Automation & Soft Computing, 2023, No. 11, pp. 153-168 (16 pages)
This study presents an Abstractive Arabic Text Summarization using Hyperparameter Tuned Denoising Deep Neural Network (AATS-HTDDNN) technique. The presented AATS-HTDDNN technique aims to generate summaries of Arabic text. In the presented AATS-HTDDNN technique, the DDNN model is utilized to generate the summary. This study exploits the Chameleon Swarm Optimization (CSO) algorithm to fine-tune the hyperparameters relevant to the DDNN model, since they considerably affect the summarization efficiency. This phase shows the novelty of the current study. To validate the enhanced summarization performance of the proposed AATS-HTDDNN model, a comprehensive experimental analysis was conducted. The comparison study outcomes confirmed the better performance of the AATS-HTDDNN model over other approaches.
Keywords: Text summarization; deep learning; denoising deep neural networks; hyperparameter tuning; Arabic language
7. PSTCNN: Explainable COVID-19 diagnosis using PSO-guided self-tuning CNN (cited 3 times)
Authors: Wei Wang, Yanrong Pei, Shui-Hua Wang, Juan Manuel Gorriz, Yu-Dong Zhang. BIOCELL (SCIE), 2023, No. 2, pp. 373-384 (12 pages)
Since 2019, the coronavirus disease-19 (COVID-19) has been spreading rapidly worldwide, posing an unignorable threat to the global economy and human health. It is a disease caused by severe acute respiratory syndrome coronavirus 2, a single-stranded RNA virus of the genus Betacoronavirus. This virus is highly infectious and relies on its angiotensin-converting enzyme 2 receptor to enter cells. With the increase in the number of confirmed COVID-19 diagnoses, the difficulty of diagnosis due to the lack of global healthcare resources becomes increasingly apparent. Deep learning-based computer-aided diagnosis models with high generalisability can effectively alleviate this pressure. Hyperparameter tuning is essential in training such models and significantly impacts their final performance and training speed. However, traditional hyperparameter tuning methods are usually time-consuming and unstable. To solve this issue, we introduce Particle Swarm Optimisation to build a PSO-guided Self-Tuning Convolution Neural Network (PSTCNN), allowing the model to tune hyperparameters automatically and reducing human involvement. Also, the optimisation algorithm can select the combination of hyperparameters in a targeted manner, thus stably achieving a solution closer to the global optimum. Experimentally, the PSTCNN obtains quite excellent results, with a sensitivity of 93.65%±1.86%, a specificity of 94.32%±2.07%, a precision of 94.30%±2.04%, an accuracy of 93.99%±1.78%, an F1-score of 93.97%±1.78%, a Matthews Correlation Coefficient of 87.99%±3.56%, and a Fowlkes-Mallows Index of 93.97%±1.78%. Our experiments demonstrate that, compared to traditional methods, hyperparameter tuning of the model using an optimisation algorithm is faster and more effective.
Keywords: COVID-19; SARS-CoV-2; Particle swarm optimisation; Convolutional neural network; hyperparameter tuning
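To illustrate the PSO-guided tuning idea (not the PSTCNN implementation itself), the sketch below runs a minimal particle swarm over two CNN hyperparameters; the bounds, swarm settings, and the stand-in objective are assumptions, and in practice the objective would train the CNN and return validation accuracy.

```python
# Sketch: a minimal particle swarm optimisation (PSO) loop over two CNN hyperparameters
# (log10 learning rate and dropout). The objective below is a placeholder fitness; in a
# real run it would train the network and return validation accuracy.
import numpy as np

def objective(params):
    log_lr, dropout = params
    # Placeholder: peaks near lr = 1e-3 and dropout = 0.3 (purely illustrative).
    return -((log_lr + 3.0) ** 2 + (dropout - 0.3) ** 2)

rng = np.random.default_rng(0)
low, high = np.array([-5.0, 0.0]), np.array([-1.0, 0.6])   # search bounds
n_particles, n_iters, w, c1, c2 = 10, 30, 0.7, 1.5, 1.5

pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([objective(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("Best learning rate:", 10 ** gbest[0], "best dropout:", gbest[1])
```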
8. A Comparative Study of Optimized-LSTM Models Using Tree-Structured Parzen Estimator for Traffic Flow Forecasting in Intelligent Transportation (cited 1 time)
Authors: Hamza Murad Khan, Anwar Khan, Santos Gracia Villar, Luis Alonso Dzul Lopez, Abdulaziz Almaleh, Abdullah M. Al-Qahtani. Computers, Materials & Continua, 2025, No. 5, pp. 3369-3388 (20 pages)
Traffic forecasting with high precision aids Intelligent Transport Systems (ITS) in formulating and optimizing traffic management strategies. The algorithms used for tuning the hyperparameters of deep learning models often achieve accurate results at the expense of high computational complexity. To address this problem, this paper uses the Tree-structured Parzen Estimator (TPE) to tune the hyperparameters of the Long Short-Term Memory (LSTM) deep learning framework. The TPE uses a probabilistic approach with an adaptive searching mechanism that classifies the objective function values into good and bad samples. This ensures fast convergence in tuning the hyperparameter values of the deep learning model while still maintaining a certain degree of accuracy. It also overcomes the problem of converging to local optima, avoids time-consuming random search, and therefore avoids high computational complexity in achieving prediction accuracy. The proposed scheme first performs data smoothing and normalization on the input data, which is then fed to the TPE for tuning the hyperparameters. The traffic data is then input to the LSTM model with tuned parameters to perform the traffic prediction. Three optimizers, Adaptive Moment Estimation (Adam), Root Mean Square Propagation (RMSProp), and Stochastic Gradient Descent with Momentum (SGDM), are also evaluated for prediction accuracy, and the best optimizer is then chosen for final traffic prediction in the TPE-LSTM model. Simulation results verify the effectiveness of the proposed model in terms of prediction accuracy over the benchmark schemes.
Keywords: Short-term traffic prediction; sequential time series prediction; TPE; tree-structured Parzen estimator; LSTM; hyperparameter tuning; hybrid prediction model
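A minimal sketch of TPE-based tuning for a small LSTM forecaster, assuming Optuna's TPE sampler as the estimator and a synthetic series standing in for traffic counts; the window size, search ranges, and training budget are illustrative assumptions, not the paper's configuration.

```python
# Sketch: TPE-based tuning of a small Keras LSTM for one-step traffic-flow forecasting.
import numpy as np
import optuna
from optuna.samplers import TPESampler
from tensorflow import keras

series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)  # stand-in traffic series
window = 12
X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]
split = int(0.8 * len(X))

def objective(trial):
    units = trial.suggest_int("units", 16, 128)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    model = keras.Sequential([keras.layers.LSTM(units), keras.layers.Dense(1)])
    model.compile(optimizer=keras.optimizers.Adam(lr), loss="mse")
    model.fit(X[:split], y[:split], epochs=5, batch_size=64, verbose=0)
    return model.evaluate(X[split:], y[split:], verbose=0)   # validation MSE to minimize

study = optuna.create_study(direction="minimize", sampler=TPESampler(seed=0))
study.optimize(objective, n_trials=20)
print("Best LSTM hyperparameters:", study.best_params)
```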
9. Three-Stage Transfer Learning with AlexNet50 for MRI Image Multi-Class Classification with Optimal Learning Rate
Authors: Suganya Athisayamani, A. Robert Singh, Gyanendra Prasad Joshi, Woong Cho. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, No. 1, pp. 155-183 (29 pages)
In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient's anatomical and physiological structures. MRI is particularly effective for detecting soft tissue anomalies. Traditionally, radiologists manually interpret these images, which can be labor-intensive and time-consuming due to the vast amount of data. To address this challenge, machine learning and deep learning approaches can be utilized to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the use of the deep AlexNet50 model for MRI classification with discriminative learning methods. There are three stages of learning: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and trained with an augmented dataset; and in the third stage, AlexNet50 is fine-tuned with the augmented dataset. This method used three publicly available MRI classification datasets for analysis: the Harvard whole brain atlas (HWBA-dataset), the School of Biomedical Engineering of Southern Medical University (SMU-dataset), and the National Institute of Neuroscience and Hospitals brain MRI dataset (NINS-dataset). Various hyperparameter optimizers, such as Adam, stochastic gradient descent (SGD), root mean square propagation (RMSprop), Adamax, and AdamW, have been used to compare the performance of the learning process. The HWBA-dataset registers the maximum classification performance. We evaluated the performance of the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
Keywords: MRI; tumors; classification; AlexNet50; transfer learning; hyperparameter tuning; optimizer
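The sketch below illustrates the frozen-feature transfer-learning stage and the optimizer comparison described above; "AlexNet50" is the paper's own model name, so torchvision's standard pretrained AlexNet is used purely as a stand-in, and the dummy batches replace a real MRI DataLoader.

```python
# Sketch: stage-2-style transfer learning (frozen convolutional features, new head) in PyTorch,
# comparing the optimizers named in the abstract. Data, class count, and learning rates are assumed.
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_classes=4):
    model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    for p in model.features.parameters():            # freeze the convolutional feature extractor
        p.requires_grad = False
    model.classifier[6] = nn.Linear(4096, num_classes)  # replace the head for MRI classes
    return model

optimizers = {
    "Adam":    lambda ps: torch.optim.Adam(ps, lr=1e-4),
    "SGD":     lambda ps: torch.optim.SGD(ps, lr=1e-3, momentum=0.9),
    "RMSprop": lambda ps: torch.optim.RMSprop(ps, lr=1e-4),
    "Adamax":  lambda ps: torch.optim.Adamax(ps, lr=1e-4),
    "AdamW":   lambda ps: torch.optim.AdamW(ps, lr=1e-4, weight_decay=1e-2),
}

# Dummy stand-in batches; a real run would use an MRI DataLoader.
train_loader = [(torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))) for _ in range(5)]
criterion = nn.CrossEntropyLoss()

for name, make_opt in optimizers.items():
    model = build_model()
    opt = make_opt(p for p in model.parameters() if p.requires_grad)
    for images, labels in train_loader:
        opt.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        opt.step()
    print(name, "final batch loss:", loss.item())
```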
10. Intelligent Deep Learning Enabled Wild Forest Fire Detection System (cited 2 times)
Author: Ahmed S. Almasoud. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 2, pp. 1485-1498 (14 pages)
The latest advancements in computer vision and deep learning (DL) techniques pave the way to design novel tools for the detection and monitoring of forest fires. In this view, this paper presents an intelligent wild forest fire detection and alarming system using a deep learning (IWFFDA-DL) model. The proposed IWFFDA-DL technique aims to identify forest fires at earlier stages through integrated sensors. The proposed IWFFDA-DL system includes an integrated sensor system (ISS) combining an array of sensors that acts as the major input source to help forecast the fire. Then, an attention-based convolution neural network with bidirectional long short-term memory (ACNN-BLSTM) model is applied to examine and identify the existence of danger. For hyperparameter tuning of the ACNN-BLSTM model, the bacterial foraging optimization (BFO) algorithm is employed, thereby enhancing the detection performance. Finally, when the fire is detected, the Global System for Mobiles (GSM) modem transmits messages to the authorities to take the required actions. An extensive set of simulations was performed and the results were investigated in terms of several aspects. The obtained results highlight the betterment of the IWFFDA-DL technique in terms of various measures.
Keywords: Forest fire; deep learning; intelligent models; metaheuristics; integrated sensor system; hyperparameter tuning
11. Deep Learning with Natural Language Processing Enabled Sentimental Analysis on Sarcasm Classification (cited 2 times)
Authors: Abdul Rahaman Wahab Sait, Mohamad Khairi Ishak. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 3, pp. 2553-2567 (15 pages)
Sentiment analysis (SA) is the procedure of recognizing the emotions related to data that exist in social networking. The existence of sarcasm in textual data is a major challenge to the efficiency of SA. Earlier works on sarcasm detection in text utilize lexical as well as pragmatic cues, namely interjections, punctuation, and sentiment shift, which are vital indicators of sarcasm. With the advent of deep learning, recent works leverage neural networks to learn lexical and contextual features, removing the need for handcrafted features. In this aspect, this study designs a deep learning with natural language processing enabled SA (DLNLP-SA) technique for sarcasm classification. The proposed DLNLP-SA technique aims to detect and classify the occurrence of sarcasm in the input data. The DLNLP-SA technique comprises various sub-processes, namely preprocessing, feature vector conversion, and classification. Initially, preprocessing is performed in diverse ways such as single character removal, multi-space removal, URL removal, stopword removal, and tokenization. Secondly, the transformation into feature vectors takes place using the N-gram feature vector technique, as sketched below. Finally, a mayfly optimization (MFO) with multi-head self-attention based gated recurrent unit (MHSA-GRU) model is employed for the detection and classification of sarcasm. To verify the enhanced outcomes of the DLNLP-SA model, a comprehensive experimental investigation was performed on the News Headlines Dataset from the Kaggle Repository, and the results signified its supremacy over the existing approaches.
Keywords: Sentiment analysis; sarcasm detection; deep learning; natural language processing; N-grams; hyperparameter tuning
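A small sketch of the preprocessing and N-gram feature-vector steps listed in the abstract, using regex cleanup and scikit-learn's CountVectorizer; the regex patterns and headline examples are illustrative assumptions rather than the paper's exact pipeline.

```python
# Sketch: text preprocessing (URL, single-character, multi-space removal) and N-gram features.
import re
from sklearn.feature_extraction.text import CountVectorizer

def preprocess(text):
    text = re.sub(r"https?://\S+", " ", text)         # URL removal
    text = re.sub(r"\b\w\b", " ", text)               # single-character removal
    text = re.sub(r"\s+", " ", text).strip().lower()  # multi-space removal
    return text

headlines = [
    "scientists discover coffee is somehow both good and bad for you http://example.com",
    "local man heroically finishes entire to-do list",
]
cleaned = [preprocess(h) for h in headlines]

# Unigram + bigram counts with English stopword removal; tokenization is handled internally.
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
X = vectorizer.fit_transform(cleaned)                 # sparse feature vectors for a downstream classifier
print(X.shape, vectorizer.get_feature_names_out()[:10])
```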
12. Intelligent Deep Learning Enabled Human Activity Recognition for Improved Medical Services (cited 2 times)
Authors: E. Dhiravidachelvi, M. Suresh Kumar, L. D. Vijay Anand, D. Pritima, Seifedine Kadry, Byeong-Gwon Kang, Yunyoung Nam. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 2, pp. 961-977 (17 pages)
Human Activity Recognition (HAR) has been made simple in recent years, thanks to advancements made in Artificial Intelligence (AI) techniques. These techniques are applied in several areas like security, surveillance, healthcare, human-robot interaction, and entertainment. Since a wearable sensor-based HAR system includes in-built sensors, human activities can be categorized based on sensor values. Further, it can also be employed in other applications such as gait diagnosis, observation of children's and adults' cognitive nature, stroke-patient hospital direction, and epilepsy and Parkinson's disease examination. Recently developed AI techniques, especially Deep Learning (DL) models, can be deployed to accomplish effective outcomes in the HAR process. With this motivation, the current research paper focuses on designing an Intelligent Hyperparameter Tuned Deep Learning-based HAR (IHPTDL-HAR) technique for the healthcare environment. The proposed IHPTDL-HAR technique aims at recognizing human actions in the healthcare environment and helps patients in managing their healthcare service. In addition, the presented model makes use of a Hierarchical Clustering (HC)-based outlier detection technique to remove the outliers. The IHPTDL-HAR technique incorporates a DL-based Deep Belief Network (DBN) model to recognize the activities of users. Moreover, the Harris Hawks Optimization (HHO) algorithm is used for hyperparameter tuning of the DBN model. Finally, a comprehensive experimental analysis was conducted upon a benchmark dataset and the results were examined under different aspects. The experimental results demonstrate that the proposed IHPTDL-HAR technique is a superior performer compared to other recent techniques under different measures.
Keywords: Artificial intelligence; human activity recognition; deep learning; deep belief network; hyperparameter tuning; healthcare
13. Deep Learning Enabled Computer Aided Diagnosis Model for Lung Cancer using Biomedical CT Images (cited 1 time)
Authors: Mohammad Alamgeer, Hanan Abdullah Mengash, Radwa Marzouk, Mohamed K. Nour, Anwer Mustafa Hilal, Abdelwahed Motwakel, Abu Sarwar Zamani, Mohammed Rizwanullah. Computers, Materials & Continua (SCIE, EI), 2022, No. 10, pp. 1437-1448 (12 pages)
Early detection of lung cancer can help improve the survival rate of patients. Biomedical imaging tools such as computed tomography (CT) images are utilized for the proper identification and positioning of lung cancer. Recently developed deep learning (DL) models can be employed for the effectual identification and classification of diseases. This article introduces a novel deep learning enabled CAD technique for lung cancer using biomedical CT images, named the DLCADLC-BCT technique. The proposed DLCADLC-BCT technique intends to detect and classify lung cancer using CT images. The proposed DLCADLC-BCT technique initially uses the gray level co-occurrence matrix (GLCM) model for feature extraction. Also, a long short-term memory (LSTM) model is applied for classifying the existence of lung cancer in the CT images. Moreover, the moth swarm optimization (MSO) algorithm is employed to optimally choose the hyperparameters of the LSTM model, such as learning rate, batch size, and epoch count. For demonstrating the improved classifier results of the DLCADLC-BCT approach, a set of simulations was executed on a benchmark dataset, and the outcomes exhibited the supremacy of the DLCADLC-BCT technique over recent approaches.
Keywords: Biomedical images; lung cancer; deep learning; machine learning; metaheuristics; hyperparameter tuning
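A brief sketch of the GLCM feature-extraction step named in the abstract, using scikit-image; the synthetic array stands in for a real CT slice, and the distance, angles, and property list are illustrative assumptions.

```python
# Sketch: GLCM texture features from a (stand-in) CT slice with scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

ct_slice = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in grayscale CT slice

# Co-occurrence matrix at one-pixel distance and four directions.
glcm = graycomatrix(ct_slice, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()   # average over the four directions
            for prop in ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]}
print(features)  # texture feature vector that would feed the downstream classifier
```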
14. Attention-Based Deep Learning Model for Early Detection of Parkinson's Disease (cited 1 time)
Authors: Mohd Sadiq, Mohd Tauheed Khan, Sarfaraz Masood. Computers, Materials & Continua (SCIE, EI), 2022, No. 6, pp. 5183-5200 (18 pages)
Parkinson's disease (PD), classified under the category of a neurological syndrome, affects the brain of a person, which leads to motor and non-motor symptoms. Among the motor symptoms, one of the major disabling symptoms is Freezing of Gait (FoG), which affects the daily standard of living of PD patients. Available treatments aim to improve the symptoms of PD. Detection of PD at the early stages is an arduous task because it is indistinguishable from a healthy individual. This work proposed a novel attention-based model for the detection of FoG events and PD, and for measuring the intensity of PD on the Unified Parkinson's Disease Rating Scale (UPDRS). Two separate datasets, the UCF Daphnet dataset for detection of Freezing of Gait events and the PhysioNet Gait in PD dataset, were used for training and validation on their respective problems. The results show a definite rise in the various performance metrics when compared to landmark models on these problems using these datasets. High values were obtained for various performance metrics: an accuracy of 98.74% for detection of FoG, 98.72% for detection of PD, and 98.05% for measuring the intensity of PD on the UPDRS. The model was also analyzed for robustness against noisy samples, where it exhibited consistent performance. These results strongly suggest that the proposed state-of-the-art attention-based deep learning model provides a consistent, efficient, and better classification method for the selected problems.
Keywords: Parkinson's disease; freezing of gait; attention mechanism; hyperparameter tuning; attentive-FoGPDNet
15. Modeling of Sensor Enabled Irrigation Management for Intelligent Agriculture Using Hybrid Deep Belief Network (cited 1 time)
Authors: Saud Yonbawi, Sultan Alahmari, B. R. S. S. Raju, Chukka Hari Govinda Rao, Mohamad Khairi Ishak, Hend Khalid Alkahtani, José Varela-Aldás, Samih M. Mostafa. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 2319-2335 (17 pages)
Artificial intelligence (AI) technologies and sensors have recently received significant interest in intellectual agriculture. Accelerating the application of AI technologies and agriculture sensors in intellectual agriculture is urgently required for the growth of modern agriculture and will help promote smart agriculture. Automatic irrigation scheduling systems are highly required in the agricultural field due to their capability to manage and save water under deficit irrigation techniques. Automatic learning systems devise an alternative to conventional irrigation management through the automatic elaboration of predictions related to the learning of an agronomist. With this motivation, this study develops a modified black widow optimization with a deep belief network-based smart irrigation system (MBWODBN-SIS) for intelligent agriculture. The MBWODBN-SIS algorithm primarily enables Internet of Things (IoT) based sensors to collect data forwarded to the cloud server for examination purposes. Besides, the MBWODBN-SIS technique applies the deep belief network (DBN) model for different types of irrigation classification: average, highly needed, highly not needed, and not needed. The MBWO algorithm is used for the hyperparameter tuning process. A wide-ranging experiment was conducted, and the comparison study stated the enhanced outcomes of the MBWODBN-SIS approach over other DL models, with a maximum accuracy of 95.73%.
Keywords: Agriculture; smart farming; hyperparameter tuning; artificial intelligence; irrigation management; sensors; deep learning
16. Modeling of Optimal Deep Learning Based Flood Forecasting Model Using Twitter Data (cited 1 time)
Authors: G. Indra, N. Duraipandian. Intelligent Automation & Soft Computing (SCIE), 2023, No. 2, pp. 1455-1470 (16 pages)
A flood is a significantly damaging natural calamity that causes loss of life and property. Earlier work on the construction of flood prediction models intended to reduce risks, suggest policies, reduce mortality, and limit property damage caused by floods. The massive amount of data generated by social media platforms such as Twitter opens the door to flood analysis. Because of the real-time nature of Twitter data, some government agencies and authorities have used it to track natural catastrophe events in order to build a more rapid rescue strategy. However, due to the short length of tweets, it is difficult to construct a perfect prediction model for determining floods. Machine learning (ML) and deep learning (DL) approaches can be used to statistically develop flood prediction models. At the same time, the vast number of tweets necessitates the use of a big data analytics (BDA) tool for flood prediction. In this regard, this work provides an optimal deep learning-based flood forecasting model with big data analytics (ODLFF-BDA) based on Twitter data. The suggested ODLFF-BDA technique intends to anticipate the existence of floods using tweets in a big data setting. The ODLFF-BDA technique comprises data pre-processing to convert the input tweets into a usable format. In addition, a Bidirectional Encoder Representations from Transformers (BERT) model is used to generate emotive contextual embeddings from tweets. Furthermore, a gated recurrent unit (GRU) with a Multilayer Convolutional Neural Network (MLCNN) is used to extract local data and predict the flood. Finally, an Equilibrium Optimizer (EO) is used to fine-tune the hyperparameters of the GRU and MLCNN models in order to increase prediction performance. The memory usage is pulled down to less than 3.5 MB compared with the other algorithm techniques. The ODLFF-BDA technique's performance was validated using a benchmark Kaggle dataset, and the findings showed that it outperformed other recent approaches significantly.
Keywords: Big data analytics; predictive models; deep learning; flood prediction; Twitter data; hyperparameter tuning
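To illustrate the BERT embedding step mentioned above (not the paper's exact setup), the sketch below encodes tweets with a pretrained BERT model from the transformers library and mean-pools the token embeddings; the checkpoint name, pooling choice, and example tweets are assumptions.

```python
# Sketch: contextual tweet embeddings from a pretrained BERT encoder.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

tweets = [
    "River levels rising fast near the bridge, roads already under water",
    "Lovely sunny afternoon in the park today",
]
batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = encoder(**batch)

# Mean-pool token embeddings (masking padding) to get one 768-d vector per tweet,
# which would feed a downstream GRU/MLCNN flood classifier.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # torch.Size([2, 768])
```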
17. Harris Hawks Optimizer with Graph Convolutional Network Based Weed Detection in Precision Agriculture (cited 1 time)
Authors: Saud Yonbawi, Sultan Alahmari, T. Satyanarayana Murthy, Padmakar Maddala, E. Laxmi Lydia, Seifedine Kadry, Jungeun Kim. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 1533-1547 (15 pages)
Precision agriculture includes the optimum and adequate use of resources depending on several variables that govern crop yield. Precision agriculture offers a novel solution utilizing a systematic technique for current agricultural problems like balancing production and environmental concerns. Weed control has become one of the significant problems in the agricultural sector. In traditional weed control, the entire field is treated uniformly by spraying the soil, a single herbicide dose, weeds, and crops in the same way. For more precise farming, robots could accomplish targeted weed treatment if they could specifically find the location of the dispensable plant and identify the weed type. This may lessen by a large margin the utilization of agrochemicals on agricultural fields and favour sustainable agriculture. This study presents a Harris Hawks Optimizer with Graph Convolutional Network based Weed Detection (HHOGCN-WD) technique for precision agriculture. The HHOGCN-WD technique mainly focuses on identifying and classifying weeds for precision agriculture. For image pre-processing, the HHOGCN-WD model utilizes a bilateral normal filter (BNF) for noise removal. In addition, a coupled convolutional neural network (CCNet) model is utilized to derive a set of feature vectors. To detect and classify weeds, the GCN model is utilized with the HHO algorithm as a hyperparameter optimizer to improve the detection performance. The experimental results of the HHOGCN-WD technique are investigated on a benchmark dataset. The results indicate the promising performance of the presented HHOGCN-WD model over other recent approaches, with an increased accuracy of 99.13%.
Keywords: Weed detection; precision agriculture; graph convolutional network; Harris hawks optimizer; hyperparameter tuning
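As a small illustration of the noise-removal pre-processing step mentioned above, the sketch uses OpenCV's standard bilateral filter as a stand-in for the paper's bilateral normal filter; the filter parameters and the random stand-in image are assumptions.

```python
# Sketch: edge-preserving bilateral filtering for noise removal in a field image.
import cv2
import numpy as np

img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)   # stand-in field image
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)  # smooth while keeping edges
print(denoised.shape)
```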
18. Electroencephalography (EEG) Based Neonatal Sleep Staging and Detection Using Various Classification Algorithms
Authors: Hafza Ayesha Siddiqa, Muhammad Irfan, Saadullah Farooq Abbasi, Wei Chen. Computers, Materials & Continua (SCIE, EI), 2023, No. 11, pp. 1759-1778 (20 pages)
Automatic sleep staging of neonates is essential for monitoring their brain development and the maturity of the nervous system. EEG-based neonatal sleep staging provides valuable information about an infant's growth and health, but is challenging due to the unique characteristics of EEG and the lack of standardized protocols. This study aims to develop and compare 18 machine learning models using the Automated Machine Learning (autoML) technique for accurate and reliable multi-channel EEG-based neonatal sleep-wake classification. The study investigates autoML feasibility without extensive manual selection of features or hyperparameter tuning. The data were obtained from neonates at a post-menstrual age of 37±05 weeks. 3525 30-s EEG segments from 19 infants are used to train and test the proposed models. Twelve time- and frequency-domain features are extracted from each channel, and each model receives the common features of nine channels as an input vector of size 108. Each model's performance was evaluated based on a variety of evaluation metrics. The maximum mean accuracy of 84.78% and kappa of 69.63% were obtained by the autoML-based Random Forest estimator; this is the highest accuracy for EEG-based sleep-wake classification to date. For the autoML-based AdaBoost Random Forest model, accuracy and kappa were 84.59% and 69.24%, respectively. The high performance achieved with the proposed autoML-based approach can facilitate early identification and treatment of sleep-related issues in neonates.
Keywords: AutoML; Random Forest; AdaBoost; EEG; neonates; PSG; hyperparameter tuning; sleep-wake classification
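A hedged sketch of the per-channel feature layout described above (12 features per channel, 9 channels, a 108-dimensional input vector); the specific feature set, sampling rate, and frequency-band edges are assumptions, not the paper's exact choices.

```python
# Sketch: simple time- and frequency-domain features per EEG channel, stacked into one vector.
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis, skew

def channel_features(x, fs=256):
    f, psd = welch(x, fs=fs, nperseg=fs * 2)
    def bandpower(lo, hi):
        idx = (f >= lo) & (f < hi)
        return np.trapz(psd[idx], f[idx])
    return [
        x.mean(), x.std(), skew(x), kurtosis(x),   # time-domain statistics
        np.ptp(x), np.sqrt(np.mean(x ** 2)),        # peak-to-peak, RMS
        bandpower(0.5, 4), bandpower(4, 8),         # delta, theta power
        bandpower(8, 13), bandpower(13, 30),        # alpha, beta power
        psd.max(), f[psd.argmax()],                 # spectral peak and its frequency
    ]                                               # 12 features per channel

segment = np.random.randn(9, 256 * 30)              # stand-in 30-s segment, 9 channels
feature_vector = np.concatenate([channel_features(ch) for ch in segment])
print(feature_vector.shape)                          # (108,) -> classifier input
```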
19. Credit Card Fraud Detection Using Improved Deep Learning Models
Authors: Sumaya S. Sulaiman, Ibraheem Nadher, Sarab M. Hameed. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1049-1069 (21 pages)
Credit card fraud is a major issue for financial organizations and individuals. As fraudulent actions become more complex, the demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including detecting credit card fraud. However, the efficacy of these models is heavily dependent on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning models with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models, AutoEncoder (AE), Convolution Neural Network (CNN), and Long Short-Term Memory (LSTM), are proposed to investigate how hyperparameter adjustment impacts the efficacy of deep learning models used to identify credit card fraud. The experiments, conducted on a European credit card fraud dataset using different hyperparameters and the three deep learning models, demonstrate that the proposed models achieve a tradeoff between detection rate and precision, making them effective in accurately predicting credit card fraud. The results demonstrate that LSTM significantly outperformed AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). These proposed models have surpassed those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
Keywords: Card fraud detection; hyperparameter tuning; deep learning; autoencoder; convolution neural network; long short-term memory; resampling
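A hedged sketch of an LSTM fraud classifier with resampling and a few exposed hyperparameters, in the spirit of the model family and keywords above; the `creditcard.csv` column layout follows the public Kaggle European dataset, and the hyperparameter values, reshaping of features into a short sequence, and training budget are illustrative assumptions.

```python
# Sketch: LSTM fraud classifier with SMOTE resampling and tunable hyperparameters.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

df = pd.read_csv("creditcard.csv")                         # assumed Kaggle dataset layout
X = StandardScaler().fit_transform(df.drop(columns=["Class"]))
y = df["Class"].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)
X_train, y_train = SMOTE(random_state=1).fit_resample(X_train, y_train)  # rebalance fraud class

# Hyperparameters a tuner would search over.
units, dropout, lr, batch_size = 64, 0.2, 1e-3, 256

model = keras.Sequential([
    keras.layers.LSTM(units),                              # the 30 features are treated as a short sequence
    keras.layers.Dropout(dropout),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(lr), loss="binary_crossentropy",
              metrics=[keras.metrics.Recall(name="detection_rate"), keras.metrics.Precision()])
model.fit(X_train[..., None], y_train, epochs=3, batch_size=batch_size, verbose=0)
print(model.evaluate(X_test[..., None], y_test, verbose=0))
```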
20. Data Mining with Comprehensive Oppositional Based Learning for Rainfall Prediction
Authors: Mohammad Alamgeer, Amal Al-Rasheed, Ahmad Alhindi, Manar Ahmed Hamza, Abdelwahed Motwakel, Mohamed I. Eldesouki. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 2725-2738 (14 pages)
The data mining process involves a number of steps, from data collection to visualization, to identify useful data from a massive data set. At the same time, recent advances in machine learning (ML) and deep learning (DL) models can be utilized for effectual rainfall prediction. With this motivation, this article develops a novel comprehensive oppositional moth flame optimization with deep learning for rainfall prediction (COMFO-DLRP) technique. The proposed COMFO-DLRP model mainly intends to predict the rainfall and thereby determine environmental changes. Primarily, data pre-processing and correlation matrix (CM) based feature selection processes are carried out. In addition, the deep belief network (DBN) model is applied for the effective prediction of rainfall data. Moreover, the COMFO algorithm was derived by integrating the concepts of comprehensive oppositional based learning (COBL) with the traditional MFO algorithm. Finally, the COMFO algorithm is employed for the optimal hyperparameter selection of the DBN model. For demonstrating the improved outcomes of the COMFO-DLRP approach, a sequence of simulations was carried out and the outcomes were assessed under distinct measures. The simulation outcomes highlighted the enhanced results of the COMFO-DLRP method over the other techniques.
Keywords: Data mining; rainfall prediction; deep learning; correlation matrix; hyperparameter tuning; metaheuristics
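A small sketch of correlation-matrix based feature selection, the pre-processing step named above; the `weather.csv` filename, the `rainfall_mm` target column, and the 0.9 redundancy threshold are assumptions for illustration.

```python
# Sketch: drop one feature from every highly correlated pair, keeping the one that
# correlates more strongly with the rainfall target.
import pandas as pd

df = pd.read_csv("weather.csv")                     # hypothetical daily weather observations
target = "rainfall_mm"

corr = df.corr(numeric_only=True)
features = [c for c in corr.columns if c != target]

to_drop = set()
for i, a in enumerate(features):
    for b in features[i + 1:]:
        if abs(corr.loc[a, b]) > 0.9:               # redundant pair
            weaker = a if abs(corr.loc[a, target]) < abs(corr.loc[b, target]) else b
            to_drop.add(weaker)

selected = [c for c in features if c not in to_drop]
print("Selected features:", selected)
```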