Journal articles
24 articles found
Power Information System Database Cache Model Based on Deep Machine Learning
1
Authors: Manjiang Xing. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 7, pp. 1081-1090 (10 pages)
At present, the database cache model of power information systems suffers from slow running speed and a low database hit rate. To this end, this paper proposes a database cache model for power information systems based on deep machine learning. The caching model includes program caching, Structured Query Language (SQL) preprocessing, and core caching modules. Statement efficiency is improved by adjusting operations such as multi-table joins and keyword replacement in the SQL optimizer. In the core caching module, a predictive model is built from boosted regression trees, with a series of regression tree models generated by machine learning algorithms. Resource occupancy in the power information system is analyzed to dynamically adjust the voting selection of the regression trees, and the voting threshold of the prediction model is adjusted dynamically as well; by analogy, the cache model is re-initialized. The experimental results show that the model has a good cache hit rate and cache efficiency and can improve the data cache performance of the power information system. It achieves a high hit rate and short delay time, maintains a good hit rate under different memory configurations, and occupies little space and CPU during actual operation, which benefits the efficient and fast operation of the power information system.
Keywords: deep machine learning power information system DATABASE cache model
Read online / Download PDF
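
For the core caching module described in the entry above, here is a minimal Python sketch of the general idea of letting an ensemble of boosted regression trees "vote" on whether a query result should be cached, with a threshold that tightens as resource occupancy rises. The features, the threshold rule, and the use of staged predictions as votes are illustrative assumptions, not the paper's implementation.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical per-query features: [recent access frequency, result size, time since last access]
X = rng.random((500, 3))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.random(500)  # synthetic "future reuse" score

model = GradientBoostingRegressor(n_estimators=50, max_depth=3).fit(X, y)

def should_cache(query_features, cpu_occupancy):
    # Treat the prediction after each boosting stage as one "vote" that the result will be reused.
    votes = np.array([p[0] for p in model.staged_predict(query_features.reshape(1, -1))])
    threshold = 0.3 + 0.4 * cpu_occupancy  # assumed rule: be stricter when the system is busy
    return (votes > threshold).mean() > 0.5

print(should_cache(np.array([0.9, 0.1, 0.2]), cpu_occupancy=0.2))
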
Developing a diagnostic support system for audiogram interpretation using deep learning-based object detection
2
Authors: Titipat Achakulvisut, Suchanon Phanthong, Thanawut Timpitak, Kanpat Vesessook, Sirinan Junthong, Withita Utainrat, Kanokrat Bunnag. Journal of Otology, 2025, Issue 1, pp. 26-32 (7 pages)
Objective: To develop and evaluate an automated system for digitizing audiograms and classifying hearing loss levels, and to compare its performance with traditional methods and otolaryngologists' interpretations. Design and Methods: We conducted a retrospective diagnostic study using 1,959 audiogram images from patients aged 7 years and older at the Faculty of Medicine, Vajira Hospital, Navamindradhiraj University. We employed an object detection approach to digitize audiograms and developed multiple machine learning models to classify six hearing loss levels. The dataset was split into 70% training (1,407 images) and 30% testing (352 images) sets. We compared our model's performance with classifications based on manually extracted audiogram values and with otolaryngologists' interpretations. Results: Our object detection-based model achieved an F1-score of 94.72% in classifying hearing loss levels, comparable to the 96.43% F1-score obtained using manually extracted values. The Light Gradient Boosting Machine (LGBM) model, used as the classifier for the manually extracted data, achieved top performance with 94.72% accuracy, 94.72% F1-score, 94.72% recall, and 94.72% precision. In the object detection-based model, the Random Forest Classifier (RFC) showed the highest accuracy of 96.43% in predicting hearing loss level, with an F1-score of 96.43%, recall of 96.43%, and precision of 96.45%. Conclusion: Our proposed automated approach for audiogram digitization and hearing loss classification performs comparably to traditional methods and otolaryngologists' interpretations. This system can potentially assist otolaryngologists in providing more timely and effective treatment by quickly and accurately classifying hearing loss.
Keywords: AUDIOGRAM deep machine learning Training set Validation set Testing set Automatic machine learning(AutoML) Random Forest Classifier(RFC) Support Vector machine(SVM) XGBoost
Read online / Download PDF
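
As a rough illustration of the classification stage described above, the sketch below maps extracted audiogram thresholds to six hearing-loss levels with a Random Forest; the synthetic data, the four test frequencies, and the dB cut-offs are assumptions, not values taken from the study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
thresholds_db = rng.uniform(0, 110, size=(1959, 4))  # synthetic thresholds at 0.5/1/2/4 kHz
pta = thresholds_db.mean(axis=1)                     # pure-tone average
bins = [25, 40, 55, 70, 90]                          # one common six-level banding (assumed)
levels = np.digitize(pta, bins)                      # 0 = normal ... 5 = profound

X_tr, X_te, y_tr, y_te = train_test_split(thresholds_db, levels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
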
Human Interaction Recognition in Surveillance Videos Using Hybrid Deep Learning and Machine Learning Models
3
Authors: Vesal Khean Chomyong Kim Sunjoo Ryu Awais Khan Min Kyung Hong Eun Young Kim Joungmin Kim Yunyoung Nam. Computers, Materials & Continua (SCIE, EI), 2024, Issue 10, pp. 773-787 (15 pages)
Human Interaction Recognition (HIR) is one of the challenging issues in computer vision research due to the involvement of multiple individuals and their mutual interactions within video frames generated from their movements. HIR requires more sophisticated analysis than Human Action Recognition (HAR), since HAR focuses solely on individual activities like walking or running, while HIR involves the interactions between people. This research aims to develop a robust system for recognizing five common human interactions (hugging, kicking, pushing, pointing, and no interaction) from video sequences captured by multiple cameras. In this study, a hybrid Deep Learning (DL) and Machine Learning (ML) model was employed to improve classification accuracy and generalizability. The dataset was collected in an indoor environment with four-channel cameras capturing the five types of interactions among 13 participants. The data were processed using a DL model with a fine-tuned ResNet (Residual Network) architecture based on 2D Convolutional Neural Network (CNN) layers for feature extraction. Subsequently, machine learning models were trained for interaction classification using six commonly used ML algorithms: SVM, KNN, RF, DT, NB, and XGBoost. The results demonstrate a high accuracy of 95.45% in classifying human interactions. The hybrid approach enabled effective learning, resulting in highly accurate performance across different interaction types. Future work will explore more complex scenarios involving multiple individuals based on this architecture.
Keywords: Convolutional neural network deep learning human interaction recognition ResNet skeleton joint key points human pose estimation hybrid deep learning and machine learning
Read online / Download PDF
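
The hybrid pipeline above (CNN features, then a classical classifier) can be sketched roughly as follows; the ResNet-18 backbone, the frame size, and the SVM settings are assumptions chosen for brevity rather than the paper's configuration, and a recent torchvision is assumed.

import numpy as np
import torch
import torchvision
from sklearn.svm import SVC

backbone = torchvision.models.resnet18(weights=None)  # untrained here; fine-tuned in practice
backbone.fc = torch.nn.Identity()                     # drop the classification head
backbone.eval()

frames = torch.rand(40, 3, 224, 224)                  # synthetic stand-in for video frames
with torch.no_grad():
    feats = backbone(frames).numpy()                  # (40, 512) feature vectors

labels = np.random.randint(0, 5, size=40)             # hugging, kicking, pushing, pointing, none
clf = SVC(kernel="rbf").fit(feats, labels)
print(clf.predict(feats[:5]))
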
Water resource forecasting with machine learning and deep learning: A scientometric analysis
4
Authors: Chanjuan Liu Jing Xu Xi’an Li Zhongyao Yu Jinran Wu. Artificial Intelligence in Geosciences, 2024, Issue 1, pp. 220-231 (12 pages)
Water prediction plays a crucial role in modern-day water resource management, encompassing both logical hydro-patterns and demand forecasts. To gain insights into its current focus, status, and emerging themes, this study analyzed 876 articles published between 2015 and 2022, retrieved from the Web of Science database. Leveraging CiteSpace visualization software, bibliometric techniques, and literature review methodologies, the investigation identified essential literature related to water prediction using machine learning and deep learning approaches. Through a comprehensive analysis, the study identified significant countries, institutions, authors, journals, and keywords in this field. By exploring these data, the research mapped out prevailing trends and cutting-edge areas, providing valuable insights for researchers and practitioners involved in water prediction through machine learning and deep learning. The study aims to guide future inquiries by highlighting key research domains and emerging areas of interest.
Keywords: Water forecasting machine learning/deep learning Web of Science VISUALIZATION
Read online / Download PDF
Project Assessment in Offshore Software Maintenance Outsourcing Using Deep Extreme Learning Machines
5
Authors: Atif Ikram Masita Abdul Jalil Amir Bin Ngah Saqib Raza Ahmad Salman Khan Yasir Mahmood Nazri Kama Azri Azmi Assad Alzayed. Computers, Materials & Continua (SCIE, EI), 2023, Issue 1, pp. 1871-1886 (16 pages)
Software maintenance is the process of fixing, modifying, and improving software deliverables after they are delivered to the client. Clients can benefit from offshore software maintenance outsourcing (OSMO) in different ways, including time savings, cost savings, and improvements in software quality and value. One of the hardest challenges for an OSMO vendor is choosing a suitable project among several clients' projects. The goal of the current study is to recommend a machine learning-based decision support system that OSMO vendors can utilize to forecast or assess OSMO clients' projects. The projects belong to OSMO vendors that have offices in developing countries while providing services to developed countries. In the current study, an Extreme Learning Machine (ELM) variant called the Deep Extreme Learning Machine (DELM) is used. A novel dataset consisting of data from 195 projects is proposed to train the model and to evaluate its overall efficiency. The proposed DELM-based model achieved 90.017% training accuracy with a Root Mean Square Error (RMSE) of 1.412×10^(-3) and 85.772% testing accuracy with an RMSE of 1.569×10^(-3) using five DELM hidden layers. The results show that the suggested model attains a notable recognition rate in comparison to previous studies. The current study also concludes that DELMs are an applicable and useful technique for assessing OSMO clients' projects.
Keywords: Software outsourcing deep extreme learning machine(DELM) machine learning(ML) extreme learning machine ASSESSMENT
Read online / Download PDF
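
The entry above builds on deep extreme learning machines; the sketch below shows only a single extreme learning machine layer (random hidden weights, least-squares output weights) on synthetic project data, since the deep stacking and the real 195-project dataset are not reproduced here.

import numpy as np

def elm_fit(X, y, n_hidden=64, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, never-trained input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in for the project dataset (features -> suitability score)
rng = np.random.default_rng(1)
X = rng.random((195, 10))
y = (X.sum(axis=1) > 5).astype(float)
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
print("training RMSE:", rmse)
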
Reducing Dataset Specificity for Deepfakes Using Ensemble Learning (Cited by: 1)
6
Authors: Qaiser Abbas Turki Alghamdi Yazed Alsaawy Tahir Alyas Ali Alzahrani Khawar Iqbal Malik Saira Bibi. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 4261-4276 (16 pages)
The emergence of deepfake videos in recent years has made image falsification a real danger. A person's face and emotions are deep-faked in a video or speech and substituted with a different face or voice by employing deep learning to analyze speech or emotional content. Because these videos are frequently so convincing, manipulation is challenging to spot. Social media are the most frequent and dangerous targets, since they are weak outlets open to extortion or slander. In earlier times it was not easy to alter videos, which required domain expertise and time; nowadays, fake videos can be generated easily and with a high level of realism. Deepfakes are forgeries and altered visual data that appear in still photos or video footage. Numerous automatic identification systems have been developed to address this issue, but they are constrained to certain datasets and perform poorly when applied to different datasets. This study aims to develop an ensemble learning model utilizing a convolutional neural network (CNN) to handle deepfakes or Face2Face. We employed ensemble learning, a technique that combines many classifiers to achieve higher prediction performance than a single classifier, boosting the model's accuracy. The performance of the generated model is evaluated on Face Forensics. This work builds a new, powerful model for automatically identifying deepfake videos with the DeepFake Detection Challenge (DFDC) dataset. We test our model on the DFDC, one of the most difficult datasets, and obtain an accuracy of 96%.
Keywords: deep machine learning deep fake CNN DFDC ensemble learning
Read online / Download PDF
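
A bare-bones illustration of the ensemble idea above: several independently initialized CNNs score the same frames and their softmax outputs are averaged (soft voting). The tiny network, the frame size, and the use of three models are assumptions for illustration only, not the paper's architecture.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(8, 2))  # real vs. fake logits
    def forward(self, x):
        return self.net(x)

models = [TinyCNN() for _ in range(3)]      # stand-ins for differently trained CNNs
frames = torch.rand(4, 3, 64, 64)           # synthetic face crops
with torch.no_grad():
    probs = torch.stack([m(frames).softmax(dim=1) for m in models]).mean(dim=0)
print("ensemble fake-probability per frame:", probs[:, 1].numpy())
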
Data Fusion-Based Machine Learning Architecture for Intrusion Detection
7
Authors: Muhammad Adnan Khan Taher M.Ghazal Sang-Woong Lee Abdur Rehman. Computers, Materials & Continua (SCIE, EI), 2022, Issue 2, pp. 3399-3413 (15 pages)
In recent years, the infrastructure of Wireless Internet of Sensor Networks (WIoSNs) has become more complicated owing to developments in the internet and device connectivity. To effectively prepare, control, hold, and optimize wireless sensor networks, a better assessment needs to be conducted. The field of artificial intelligence has made a great deal of progress with deep learning systems, and these techniques have been used for data analysis. This study investigates the methodology of the Real Time Sequential Deep Extreme Learning Machine (RTS-DELM) applied to wireless Internet of Things (IoT)-enabled sensor networks for the detection of any intrusion activity. Data fusion is a well-known methodology that can be beneficial for improving data accuracy as well as for maximizing the lifespan of wireless sensor networks. We also suggest an approach that not only casts a parallel data fusion network but also renders its computations more effective. Using the RTS-DELM methodology, a high degree of reliability with a minimal error rate in detecting any intrusion activity in wireless sensor networks is accomplished. Simulation results show that wireless sensor networks are optimized effectively to monitor and detect any malicious or intrusion activity through this proposed approach. Eventually, threats and a more general outlook are explored.
Keywords: Wireless internet of sensor networks machine learning deep extreme learning machine artificial intelligence data fusion
Read online / Download PDF
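
To illustrate the "real-time sequential" aspect mentioned above, the sketch below updates the output weights of an extreme learning machine recursively as new sensor batches arrive (an OS-ELM-style recursive least-squares update). This is a simplified stand-in for RTS-DELM on synthetic data; the data-fusion layers and deep stacking are omitted.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 6, 40
W = rng.normal(size=(n_in, n_hidden))
b = rng.normal(size=n_hidden)
hidden = lambda X: np.tanh(X @ W + b)

# Initial batch of sensor feature vectors with intrusion labels (synthetic)
X0 = rng.random((100, n_in))
y0 = (X0[:, 0] > 0.5).astype(float).reshape(-1, 1)
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(n_hidden))  # small ridge term for stability
beta = P @ H0.T @ y0

def sequential_update(P, beta, X_new, y_new):
    H = hidden(X_new)
    S = np.linalg.inv(np.eye(len(X_new)) + H @ P @ H.T)
    P = P - P @ H.T @ S @ H @ P                   # recursive least-squares update
    beta = beta + P @ H.T @ (y_new - H @ beta)
    return P, beta

X1 = rng.random((50, n_in))
y1 = (X1[:, 0] > 0.5).astype(float).reshape(-1, 1)
P, beta = sequential_update(P, beta, X1, y1)
pred = (hidden(X1) @ beta > 0.5).astype(int)
print("detection accuracy on the new batch:", (pred.flatten() == y1.flatten()).mean())
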
Cybersecurity and Cyber Forensics: Machine Learning Approach Systematic Review
8
Authors: Ibrahim Goni Jerome MGumpy Timothy UMaigari Murtala Mohammad. Semiconductor Science and Information Devices, 2020, Issue 2, pp. 25-29 (5 pages)
The proliferation of cloud computing and the Internet of Things has led to the connectivity of states and nations (developed and developing countries) worldwide, with the global network providing the platform for this connection. Digital forensics is a field of computer security that uses software applications and standard guidelines to support the extraction of evidence from computer appliances that is sufficient for a court of law to use in making a judgment based on the comprehensiveness, authenticity, and objectivity of the information obtained. Cybersecurity is of major concern to internet users worldwide due to recent forms of attacks, threats, viruses, and intrusions occurring every day across the Internet of Things. Cybersecurity rests on the confidentiality, integrity, and validity of data. The aim of this work is to make a systematic review of the application of machine learning algorithms to cybersecurity and cyber forensics and to pave the way for further research directions on the application of deep learning, computational intelligence, and soft computing to cybersecurity and cyber forensics.
Keywords: CYBERSECURITY Cyber forensics Cyber space Cyber threat machine learning and deep learning
Read online / Download PDF
Role of artificial intelligence in screening and medical imaging of precancerous gastric diseases
9
Authors: Sergey M Kotelevets. World Journal of Clinical Oncology, 2025, Issue 9, pp. 115-126 (12 pages)
Serological screening, endoscopic imaging, and morphological visual verification of precancerous gastric diseases and changes in the gastric mucosa are the main stages of early detection, accurate diagnosis, and preventive treatment of gastric precancer. Laboratory-serological, endoscopic, and histological diagnostics are carried out by medical laboratory technicians, endoscopists, and histologists, and human factors introduce a very large share of subjectivity. Endoscopists and histologists are guided by the descriptive principle when formulating imaging conclusions, and diagnostic reports from doctors often result in contradictory and mutually exclusive conclusions. Erroneous results from diagnosticians and clinicians have fatal consequences, such as late diagnosis of gastric cancer and high patient mortality. Effective population serological screening is only possible with machine processing of laboratory test results. Currently, the subjective, imprecise description of endoscopic and histological images by a diagnostician can be replaced by objective, highly sensitive, and highly specific visual recognition using convolutional neural networks with deep machine learning. There are many machine learning models to use, and all of them have predictive capabilities. Based on such predictive models, it is necessary to identify gastric cancer risk levels in patients with a very high probability.
Keywords: Precancerous gastric diseases Atrophic gastritis Serological screening Risk of gastric cancer Medical imaging Artificial intelligence Convolutional neural networks deep machine learning
Not yet subscribed
An improved deep learning model for soybean future price prediction with hybrid data preprocessing strategy
10
Authors: Dingya CHEN Hui LIU Yanfei LI Zhu DUAN. Frontiers of Agricultural Science and Engineering, 2025, Issue 2, pp. 208-230 (23 pages)
The futures trading market is an important part of the financial markets, and soybeans are one of the most strategically important crops in the world. How to predict soybean futures prices is a challenging topic studied by many researchers. This paper proposes a novel hybrid soybean futures price prediction model that includes two stages: data preprocessing and deep learning prediction. In the data preprocessing stage, futures price series are decomposed into subsequences using the ICEEMDAN (improved complete ensemble empirical mode decomposition with adaptive noise) method. The Lempel-Ziv complexity determination method is then used to identify and reconstruct high-frequency subsequences. Finally, the high-frequency component is decomposed a second time using variational mode decomposition optimized by the beluga whale optimization algorithm. In the deep learning prediction stage, a deep extreme learning machine optimized by the sparrow search algorithm is used to obtain the prediction results of all subseries, which are then reconstructed to obtain the final soybean futures price prediction. Based on experimental results for soybean futures markets in China, Italy, and the United States, the proposed hybrid method provides superior performance in terms of prediction accuracy and robustness.
Keywords: deep extreme learning machine hybrid data preprocessing optimization algorithm soybean future price prediction
Full-text delivery
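
The decompose-predict-reconstruct pattern described above can be illustrated very loosely as follows; a moving-average split and ridge regressions stand in for ICEEMDAN/VMD and the optimized deep extreme learning machine, purely to show the pipeline shape on synthetic prices.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0, 1, 400)) + 100   # synthetic soybean futures series

kernel = np.ones(10) / 10
trend = np.convolve(price, kernel, mode="same")  # low-frequency component
residual = price - trend                          # high-frequency component

def lagged(series, lags=5):
    # Build (window of past values -> next value) pairs for one component
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

forecast = 0.0
for comp in (trend, residual):                    # predict each subseries separately
    X, y = lagged(comp)
    model = Ridge().fit(X, y)
    forecast += model.predict(comp[-5:].reshape(1, -1))[0]  # one-step-ahead forecast
print("reconstructed next-step forecast:", forecast)
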
Deep kernel extreme learning machine classifier based on the improved sparrow search algorithm
11
Authors: Zhao Guangyuan Lei Yu. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2024, Issue 3, pp. 15-29 (15 pages)
In classification problems, the deep kernel extreme learning machine (DKELM) offers efficient processing and superior performance, but its parameters are difficult to optimize. To improve the classification accuracy of DKELM, a DKELM algorithm optimized by an improved sparrow search algorithm (ISSA), named ISSA-DKELM, is proposed in this paper. To address the parameter selection problem of DKELM, the DKELM classifier is constructed using the optimal parameters obtained by ISSA optimization. To make up for the shortcomings of the basic sparrow search algorithm (SSA), a chaotic transformation is first applied to initialize the sparrow positions. Then, the position of the discoverer sparrow population is dynamically adjusted, and a learning operator from the teaching-learning-based algorithm is fused in to improve the position update of the joiners. Finally, a Gaussian mutation strategy is added in the later iterations of the algorithm to help the sparrows jump out of local optima. The experimental results show that the proposed DKELM classifier is feasible and effective and, compared with other classification algorithms, achieves better test accuracy.
Keywords: deep kernel extreme learning machine(DKELM) improved sparrow search algorithm(ISSA) CLASSIFIER parameters optimization
Full-text delivery
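
A condensed sketch of a kernel extreme learning machine whose regularization and kernel parameters are tuned by a crude random search standing in for the ISSA described above; the RBF kernel, parameter ranges, and synthetic data are assumptions, and the "deep" stacking is omitted.

import numpy as np
from sklearn.metrics import accuracy_score

def rbf_kernel(A, B, gamma):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def kelm_fit(X, y_onehot, C, gamma):
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y_onehot)  # output weights

def kelm_predict(X_train, X, beta, gamma):
    return rbf_kernel(X, X_train, gamma) @ beta

rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1).astype(int)
Y = np.eye(2)[y]                                  # one-hot targets

best = (None, -1.0)
for _ in range(30):                                # stand-in for ISSA's guided search
    C, gamma = 10 ** rng.uniform(-2, 3), 10 ** rng.uniform(-2, 1)
    beta = kelm_fit(X, Y, C, gamma)
    acc = accuracy_score(y, kelm_predict(X, X, beta, gamma).argmax(1))
    if acc > best[1]:
        best = ((C, gamma), acc)
print("best (C, gamma) and training accuracy:", best)
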
Deep Capsule Residual Networks for Better Diagnosis Rate in Medical Noisy Images
12
Authors: P.S.Arthy A.Kavitha. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 5, pp. 1381-1393 (13 pages)
With the advent of machine and deep learning algorithms, medical image diagnosis has taken on a new perception of diagnosis and clinical treatment. Regrettably, medical images remain susceptible to noise despite advances in intelligent imaging techniques, and the presence of noisy images degrades both the diagnosis and the clinical treatment processes. Existing intelligent methods are deficient in handling the diverse range of noise in versatile medical images. This paper proposes a novel deep learning network which learns from the substantial extent of noise in medical data samples to alleviate this challenge. The proposed deep learning architecture exploits the advantages of the capsule network, which is used to extract correlation features and combine them with redefined residual features. Additionally, the final stage of dense learning is replaced with powerful extreme learning machines to achieve a better diagnosis rate, even for noisy and complex images. Extensive experimentation has been conducted using different medical images. Performance measures such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Metric (SSIM) are compared with existing deep learning architectures, and a comprehensive analysis of the individual algorithms is provided. The experimental results prove that the proposed model outperforms the other existing algorithms by a substantial margin and demonstrate its supremacy over the other learning models.
Keywords: machine and deep learning algorithm capsule networks residual networks extreme learning machines correlation features
Read online / Download PDF
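
On the evaluation side mentioned above, PSNR and SSIM can be computed as in the sketch below (PSNR directly in NumPy, SSIM via scikit-image, which is assumed to be installed); the images are synthetic stand-ins for a clean scan, its noisy version, and a denoised output.

import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, test, data_range=1.0):
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((128, 128))
noisy = np.clip(clean + rng.normal(0, 0.1, clean.shape), 0, 1)  # stand-in for a noisy scan
denoised = np.clip(noisy * 0.7 + clean * 0.3, 0, 1)             # stand-in for the network output

print("PSNR:", psnr(clean, denoised))
print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))
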
Deep Capsule Residual Networks for Better Diagnosis Rate in Medical Noisy Images
13
Authors: P.S.Arthy A.Kavitha. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 6, pp. 2959-2971 (13 pages)
With the advent of machine and deep learning algorithms, medical image diagnosis has taken on a new perception of diagnosis and clinical treatment. Regrettably, medical images remain susceptible to noise despite advances in intelligent imaging techniques, and the presence of noisy images degrades both the diagnosis and the clinical treatment processes. Existing intelligent methods are deficient in handling the diverse range of noise in versatile medical images. This paper proposes a novel deep learning network which learns from the substantial extent of noise in medical data samples to alleviate this challenge. The proposed deep learning architecture exploits the advantages of the capsule network, which is used to extract correlation features and combine them with redefined residual features. Additionally, the final stage of dense learning is replaced with powerful extreme learning machines to achieve a better diagnosis rate, even for noisy and complex images. Extensive experimentation has been conducted using different medical images. Performance measures such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Metric (SSIM) are compared with existing deep learning architectures, and a comprehensive analysis of the individual algorithms is provided. The experimental results prove that the proposed model outperforms the other existing algorithms by a substantial margin and demonstrate its supremacy over the other learning models.
Keywords: machine and deep learning algorithm capsule networks residual networks extreme learning machines correlation features
Read online / Download PDF
Artificial Intelligence in Traditional Chinese Medicine: Multimodal Fusion and Machine Learning for Enhanced Diagnosis and Treatment Efficacy
14
Authors: Jie Wang Yong-mei Liu Jun Li Hao-qiang He Chao Liu Yi-jie Song Su-ya Ma. Current Medical Science, 2025, Issue 5, pp. 1013-1022 (10 pages)
Artificial intelligence (AI) serves as a key technology in global industrial transformation and technological restructuring and as the core driver of the fourth industrial revolution. Currently, deep learning techniques such as convolutional neural networks enable intelligent information collection in fields such as tongue and pulse diagnosis owing to their robust feature-processing capabilities. Natural language processing models, including long short-term memory networks and transformers, have been applied to traditional Chinese medicine (TCM) for diagnosis, syndrome differentiation, and prescription generation. Traditional machine learning algorithms, such as neural networks, support vector machines, and random forests, are also widely used in TCM diagnosis and treatment because of their strong regression and classification performance on small structured datasets. Future research on AI in TCM diagnosis and treatment may emphasize building large-scale, high-quality TCM datasets with unified criteria based on syndrome elements; identifying algorithms suited to the distribution of TCM theoretical data; and leveraging AI multimodal fusion and ensemble learning techniques for diverse raw features, such as images, text, and manually processed structured data, to increase the clinical efficacy of TCM diagnosis and treatment.
Keywords: Artificial intelligence Traditional Chinese medicine machine learning deep learning Syndromic elements Multimodal fusion Ensemble learning Clinical diagnosis Prescription generation Clinical Efficacy
Read online / Download PDF
Research on the IL-Bagging-DHKELM Short-Term Wind Power Prediction Algorithm Based on Error AP Clustering Analysis
15
Authors: Jing Gao Mingxuan Ji Hongjiang Wang Zhongxiao Du. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 5017-5030 (14 pages)
With the continuous advancement of China's "peak carbon dioxide emissions and carbon neutrality" process, the proportion of wind power is increasing. To address the problem that forecasting models become outdated as wind power data are continuously updated, a short-term wind power forecasting algorithm based on the Incremental Learning-Bagging Deep Hybrid Kernel Extreme Learning Machine (IL-Bagging-DHKELM) with error affinity propagation cluster analysis is proposed. The algorithm effectively combines the deep hybrid kernel extreme learning machine (DHKELM) with incremental learning (IL). First, an initial wind power prediction model is trained using the Bagging-DHKELM model. Second, a Euclidean morphological distance affinity propagation (AP) clustering algorithm is used to cluster and analyze the prediction errors of the initial model. Finally, the correlation between wind power prediction errors and Numerical Weather Prediction (NWP) data is introduced as incremental updates to the initial wind power prediction model. During the incremental learning process, multiple error performance indicators are used to measure overall model performance, thereby enabling incremental updates of the wind power models. Practical examples show that the proposed method reduces the root mean square error of the initial model by 1.9 percentage points, indicating that it can better adapt to the continuing increase in wind power penetration. The accuracy and precision of wind power generation prediction are effectively improved through the method.
Keywords: Short-term wind power prediction deep hybrid kernel extreme learning machine incremental learning error clustering
Read online / Download PDF
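
The error-analysis step above can be sketched as follows: affinity propagation clusters the initial model's prediction errors paired with an NWP-style feature, so that each cluster could drive an incremental update. The DHKELM model itself and the Euclidean morphological distance are not reproduced, and the data are synthetic.

import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
nwp_wind_speed = rng.uniform(2, 14, 300)                   # synthetic NWP feature
errors = 0.05 * nwp_wind_speed + rng.normal(0, 0.2, 300)   # synthetic prediction errors

X = np.column_stack([nwp_wind_speed, errors])
clusters = AffinityPropagation(random_state=0).fit_predict(X)
print("number of error clusters:", len(np.unique(clusters)))
for c in np.unique(clusters)[:3]:
    print(f"cluster {c}: mean error {errors[clusters == c].mean():+.3f}")
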
Advancing Type II Diabetes Predictions with a Hybrid LSTM-XGBoost Approach
16
Authors: Ayoub Djama Waberi Ronald Waweru Mwangi Richard Maina Rimiru. Journal of Data Analysis and Information Processing, 2024, Issue 2, pp. 163-188 (26 pages)
In this paper, we explore the ability of a hybrid model integrating Long Short-Term Memory (LSTM) networks and eXtreme Gradient Boosting (XGBoost) to enhance the prediction accuracy of Type II Diabetes Mellitus, which is caused by a combination of genetic, behavioral, and environmental factors. We utilize comprehensive datasets from the Women in Data Science (WiDS) Datathon for the years 2020 and 2021, which provide the wide range of patient information required for reliable prediction. The research employs a novel approach by combining LSTM's ability to analyze sequential data with XGBoost's strength in handling structured datasets. The methodology includes preparing the data for analysis and implementing the hybrid model: the LSTM model, which excels at processing sequential data, detects temporal patterns and trends in patient history, while XGBoost, known for its classification effectiveness, converts these patterns into predictive insights. Our results demonstrate that the LSTM-XGBoost model operates effectively, achieving a prediction accuracy of 0.99. This study not only shows the usefulness of the hybrid LSTM-XGBoost model in predicting diabetes but also charts a path for future research. This progress in machine learning applications represents a significant step forward in healthcare, with the potential to alter the treatment of chronic diseases such as diabetes and lead to better patient outcomes.
Keywords: LSTM XGBoost Hybrid Models machine learning deep learning
Not yet subscribed
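
A bare-bones sketch of the hybrid architecture described above: an LSTM encodes a patient's measurement sequence into a fixed vector and an XGBoost classifier predicts the label from that vector. The layer sizes, sequence length, and synthetic data are assumptions, and the xgboost package is assumed to be installed; the two parts would normally be trained, not used untrained as here.

import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class SeqEncoder(nn.Module):
    def __init__(self, n_features=8, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
    def forward(self, x):
        _, (h, _) = self.lstm(x)   # last hidden state summarizes the sequence
        return h[-1]

rng = np.random.default_rng(0)
seqs = torch.rand(300, 12, 8)       # 300 patients, 12 time steps, 8 measurements
labels = rng.integers(0, 2, 300)    # synthetic diabetes labels

encoder = SeqEncoder()
with torch.no_grad():
    feats = encoder(seqs).numpy()   # fixed-length sequence features

clf = XGBClassifier(n_estimators=100, max_depth=3).fit(feats, labels)
print("training accuracy:", (clf.predict(feats) == labels).mean())
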
New genetic risk and metabolic signatures of insulin signaling and androgen synthesis in anovulation of polycystic ovary syndrome (Cited by: 2)
17
Authors: 吴效科 黄志超 曹义娟 李建 李志强 马红丽 高敬书 常惠 张多加 丛晶 王宇 吴奇 Xiaoxiao Han Pui Wah Jacqueline Chung Yiran Li Xu Zheng Lingxi Chen Lin Zeng Astrid Borchert Hartmut Kuhn Zi-Jiang Chen Ernest Hung Yu Ng Elisabet Stener-Victorin 张和平 Richard S.Legro Ben Willem J.Mol 师咏勇. Engineering (SCIE, EI, CAS, CSCD), 2023, Issue 4, pp. 103-111, M0005, M0006 (11 pages)
Ovulation induction is the first-line treatment for infertility in polycystic ovary syndrome (PCOS). A poor ovarian ovulatory response to induction is thought to be related to insulin resistance and hyperandrogenism. In a prospective cohort of 1,000 infertile women with PCOS (PCOSAct), we performed whole-exome sequencing combined with targeted single-nucleotide polymorphism (SNP) sequencing and a metabolomics study. Common variants and rare mutations significantly associated with anovulation were identified at the genome-wide level, and an ovulation prediction model was built using machine learning algorithms. The study found that a common variant in the ZNF438 gene tagged by rs2994652 (p=2.47×10^(-8)) and a rare functional mutation in the REC114 gene (rs182542888, p=5.79×10^(-6)) were significantly associated with failure of ovulation induction. Infertile women with PCOS carrying the rs2994652 A allele or REC114 p.Val101Leu (rs182542888) had lower overall ovulation rates during induction (odds ratio (OR)=1.96, 95% confidence interval (CI) [1.55-2.49]; OR=11.52, 95% CI [3.08-43.05], respectively) and a longer interval to ovulation (mean 56.7 days vs. 49.0 days, p<0.001; 78.1 days vs. 68.6 days, p=0.014). In rs2994652 carriers, L-phenylalanine levels were elevated and positively correlated with the homeostasis model assessment of insulin resistance (HOMA-IR) index (r=0.22, p=0.05) and fasting glucose (r=0.33, p=0.003); in rs182542888 carriers, arachidonic acid metabolite levels were decreased and negatively correlated with elevated anti-Müllerian hormone (r=-0.51, p=0.01) and total testosterone (r=-0.71, p=0.02). A combined prediction model integrating the genetic variants, metabolites, and clinical features improved the prediction of ovulation (area under the curve (AUC)=76.7%). A common variant in ZNF438 and a rare functional mutation in REC114, together with the associated changes in phenylalanine and arachidonic acid metabolites, are associated with failure of ovulation induction in infertile women with PCOS.
Keywords: Polycystic ovary syndrome INFERTILITY Ovulation responses ZNF438 REC114 Whole-exome sequencing deep machine learning
Not yet subscribed
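
As a toy illustration of the joint prediction model mentioned at the end of the abstract (combining a genetic variant, a metabolite, and a clinical feature and scoring by AUC), the sketch below uses logistic regression on synthetic data; it is not the study's model, and the effect sizes are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
risk_allele = rng.integers(0, 2, n)                      # carrier of a risk variant (synthetic)
phenylalanine = rng.normal(0, 1, n) + 0.5 * risk_allele  # synthetic metabolite level
homa_ir = rng.normal(0, 1, n)                            # synthetic clinical feature
logit = 0.8 * risk_allele + 0.6 * phenylalanine + 0.4 * homa_ir
anovulation = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic outcome

X = np.column_stack([risk_allele, phenylalanine, homa_ir])
X_tr, X_te, y_tr, y_te = train_test_split(X, anovulation, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
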
Innovative Fungal Disease Diagnosis System Using Convolutional Neural Network
18
Authors: Tahir Alyas Khalid Alissa Abdul Salam Mohammad Shazia Asif Tauqeer Faiz Gulzar Ahmed. Computers, Materials & Continua (SCIE, EI), 2022, Issue 12, pp. 4869-4883 (15 pages)
Fungal disease affects more than a billion people worldwide, with different types of fungal diseases causing life-threatening infections. The outer layer of the body is called the integumentary system; the skin, hair, nails, and glands are all part of it. These organs and tissues serve as the first line of defence against bacteria while protecting the body from harm and the sun. The integumentary system serves as a barrier between the outside world and the regulated environment inside our bodies, has a regulating effect, and protects against heat, light, damage, and illness. Infections caused by fungi are found in almost every part of the natural world; when an invasive fungus takes over a body region and overwhelms the immune system, it causes fungal infections in people. A primary goal of this study was to create a Convolutional Neural Network (CNN)-based technique for detecting and classifying various types of fungal diseases. There are numerous fungal illnesses, but only two, candidiasis and tinea infections, are identified and classified using the proposed Innovative Fungal Disease Diagnosis (IFDD) system. This paper aims to detect infected skin issues and provide treatment recommendations based on the proposed system's findings. Deep machine learning techniques are utilized to identify and categorize fungal infections. A CNN architecture was created and produced promising results that improve the proposed system's accuracy. The findings demonstrate that a CNN can identify and classify numerous species of fungal spores early and estimate all conceivable fungus hazards. Our CNN-based IFDD system can detect fungal diseases from medical images with a predictive accuracy of 99.6%.
Keywords: deep machine learning CNN ReLU skin disease FUNGAL
Read online / Download PDF
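
A toy two-class CNN in the spirit of the IFDD system described above, distinguishing candidiasis from tinea images; the layer sizes, input resolution, and single training step shown are assumptions, and the model is far smaller than a production network.

import torch
import torch.nn as nn

class SkinCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))
    def forward(self, x):
        return self.head(self.features(x))

model = SkinCNN()
images = torch.rand(8, 3, 64, 64)                  # stand-in for skin-lesion photos
labels = torch.randint(0, 2, (8,))                 # 0 = candidiasis, 1 = tinea
loss = nn.CrossEntropyLoss()(model(images), labels)  # one training step's loss
loss.backward()
print("loss:", loss.item())
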
Human Being Emotion in Cognitive Intelligent Robotic Control Pt I: Quantum/Soft Computing Approach
19
Authors: Alla A.Mamaeva Andrey V.Shevchenko Sergey V.Ulyanov. Artificial Intelligence Advances, 2020, Issue 1, pp. 1-30 (30 pages)
The article consists of two parts. Part I shows the possibility of using quantum/soft computing optimizers of knowledge bases (QSCOptKB™) as the toolkit for implementing quantum deep machine learning technology in the search for solutions to intelligent cognitive control tasks, applying a cognitive helmet as the neurointerface. In this particular case, the aim of this part is to demonstrate the possibility of classifying the mental states of a human operator online, with knowledge extraction from electroencephalograms based on the SCOptKB™ and QCOptKB™ toolkits. The application of soft computing technologies to identify objective indicators of the psychophysiological state of an examined person is described, and the role and necessity of developing intelligent information technologies based on computational intelligence toolkits for objectively estimating the general psychophysical state of a human operator are shown. The developed information technology is examined with special examples (difficult in diagnostic practice) of emotional state estimation in children with autism spectrum disorder (ASD) and patients with dementia, and it provides background for designing knowledge bases for service-use intelligent robots. The application of cognitive intelligent control to the navigation of an autonomous robot for obstacle avoidance is demonstrated.
Keywords: Neural interface Computational intelligence toolkit Intelligent control system deep machine learning Emotions Quantum soft computing optimizer
Read online / Download PDF
An artificial neural network based deep collocation method for the solution of transient linear and nonlinear partial differential equations
20
Authors: Abhishek MISHRA Cosmin ANITESCU Pattabhi Ramaiah BUDARAPU Sundararajan NATARAJAN Pandu Rang VUNDAVILLI Timon RABCZUK. Frontiers of Structural and Civil Engineering (SCIE, EI, CSCD), 2024, Issue 8, pp. 1296-1310 (15 pages)
A combined deep machine learning (DML) and collocation-based approach to solving partial differential equations using artificial neural networks is proposed. The developed method is applied to solve problems governed by the Sine-Gordon equation (SGE), the scalar wave equation, and elasto-dynamics. Two methods are studied: one is a space-time formulation and the other is a semi-discrete method based on implicit Runge-Kutta (RK) time integration. The methodology is implemented using the TensorFlow framework and tested on several numerical examples. Based on the results, the relative normalized error was observed to be less than 5% in all cases.
Keywords: collocation method artificial neural networks deep machine learning Sine-Gordon equation transient wave equation dynamic scalar and elasto-dynamic equation Runge-Kutta method
Full-text delivery
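
The space-time collocation formulation above can be sketched for the 1D linear wave equation as below; it is written in PyTorch for brevity (the paper uses TensorFlow), and the boundary and initial-condition terms of the loss are omitted.

import torch

# Network approximating u(x, t)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

def pde_residual(xt, c=1.0):
    # Residual of u_tt - c^2 * u_xx at the collocation points, via automatic differentiation
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    u_tt = torch.autograd.grad(u_t.sum(), xt, create_graph=True)[0][:, 1:2]
    return u_tt - c ** 2 * u_xx

xt = torch.rand(256, 2)                        # random space-time collocation points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):                        # boundary/initial-condition losses omitted
    opt.zero_grad()
    loss = (pde_residual(xt) ** 2).mean()
    loss.backward()
    opt.step()
print("mean squared PDE residual:", loss.item())
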