Journal Articles
267 articles found
A Deep Learning Framework for Heart Disease Prediction with Explainable Artificial Intelligence
1
Authors: Muhammad Adil, Nadeem Javaid, Imran Ahmed, Abrar Ahmed, Nabil Alrajeh. Computers, Materials & Continua, 2026, No. 1, pp. 1944-1963 (20 pages)
Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the issue of class imbalance inherent in the personal key indicators of the heart disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristics, 1.09% in recall, and 1.07% in F1-score. Furthermore, robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Moreover, model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
Keywords: heart disease; deep learning; localized random affine shadowsampling; local interpretable model-agnostic explanations; Shapley additive explanations; 10-fold cross-validation
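The abstract above validates robustness with 10-fold cross-validation. A minimal pure-Python sketch of the fold-splitting logic (illustrative only, not the authors' pipeline; `k_fold_indices` is a hypothetical helper):

```python
import random

def k_fold_indices(n_samples, k=10, seed=42):
    """Shuffle sample indices and yield (train, test) index lists for k folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # round-robin split keeps fold sizes balanced
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# Every sample lands in exactly one test fold across the k iterations.
splits = list(k_fold_indices(100, k=10))
```

Each model is refit on the 90-sample train split and scored on the held-out 10, and the k scores are averaged.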
Inverse Design of Composite Materials Based on Latent Space and Bayesian Optimization
2
Authors: Xianrui Lyu, Xiaodan Ren. Computer Modeling in Engineering & Sciences, 2026, No. 1, pp. 1-25 (25 pages)
Inverse design of advanced materials represents a pivotal challenge in materials science. Leveraging the latent space of Variational Autoencoders (VAEs) for material optimization has emerged as a significant advancement in the field of material inverse design. However, VAEs are inherently prone to generating blurred images, posing challenges for precise inverse design and microstructure manufacturing. While increasing the dimensionality of the VAE latent space can mitigate reconstruction blurriness to some extent, it simultaneously imposes a substantial burden on target optimization due to an excessively high search space. To address these limitations, this study adopts a Variational Autoencoder guided Conditional Diffusion Generative Model (VAE-CDGM) framework integrated with Bayesian optimization to achieve the inverse design of composite materials with targeted mechanical properties. The VAE-CDGM model synergizes the strengths of VAEs and Denoising Diffusion Probabilistic Models (DDPM), enabling the generation of high-quality, sharp images while preserving a manipulable latent space. To accommodate varying dimensional requirements of the latent space, two optimization strategies are proposed. When the latent space dimensionality is excessively high, SHapley Additive exPlanations (SHAP) sensitivity analysis is employed to identify critical latent features for optimization within a reduced subspace. Conversely, direct optimization is performed in the low-dimensional latent space of VAE-CDGM when dimensionality is modest. The results demonstrate that both strategies accurately achieve the targeted design of composite materials while circumventing the blurred reconstruction flaws of VAEs, which offers a novel pathway for the precise design of advanced materials.
Keywords: variational autoencoder; denoising diffusion generative model; composite materials; Bayesian optimization; SHapley Additive exPlanations
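The latent-space manipulation described above rests on the VAE's reparameterization trick, which keeps latent sampling differentiable. A toy sketch of that sampling step and the standard-normal KL term (illustrative only, unrelated to the authors' VAE-CDGM code):

```python
import math, random

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), one draw per latent dim."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) summed over latent dimensions."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

rng = random.Random(0)
z = reparameterize([0.0, 1.0], [0.0, 0.0], rng)  # sigma = 1 in both dims
```

Optimizers (Bayesian or otherwise) then search over `z` directly and decode candidates back to microstructures.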
Multi-source remote sensing and machine learning reveal spatiotemporal variations and drivers of NPP in the Tianshan Mountains,China
3
Authors: LI Jiani, XU Denghui, XU Zhonglin, WANG Yao, YANG Jianjun. Journal of Arid Land, 2026, No. 1, pp. 56-83 (28 pages)
Arid mountain ecosystems are highly sensitive to hydrothermal stress and land use intensification, yet where net primary productivity (NPP) degradation is likely to persist and what drives it remain unclear in the Tianshan Mountains of Northwest China. We integrated multi-source remote sensing with the Carnegie-Ames-Stanford Approach (CASA) model to estimate NPP during 2000-2020, assessed trend persistence using the Hurst exponent, and identified key drivers and nonlinear thresholds with Extreme Gradient Boosting (XGBoost) and SHapley Additive exPlanations (SHAP). Total NPP averaged 55.74 Tg C/a and ranged from 48.07 to 65.91 Tg C/a from 2000 to 2020, while regional mean NPP rose from 138.97 to 160.69 g C/(m²·a). Land use transfer analysis showed that grassland expanded mainly at the expense of unutilized land and that cropland increased overall. Although NPP increased across 64.11% of the region during 2000-2020, persistence analysis suggested that 53.93% of the Tianshan Mountains was prone to continued NPP decline, including 36.41% with significant projected decline and 17.52% with weak projected decline; these areas formed degradation hotspots concentrated in the central and northern Tianshan Mountains. In contrast, potential improvement was limited (strong persistent improvement: 4.97%; strong anti-persistent improvement: 0.36%). Driver attribution indicated that land use dominated NPP variability (mean absolute SHAP value = 29.54%), followed by precipitation (16.03%) and temperature (11.05%). SHAP dependence analyses showed that precipitation effects stabilized at 300.00-400.00 mm, and temperature exhibited an inverted U-shaped response with a peak near 0.00°C. These findings indicated that persistent degradation risk arose from hydrothermal constraints interacting with land use conversion, highlighting the need for threshold-informed, spatially targeted management to sustain carbon sequestration in arid mountain ecosystems.
Keywords: net primary productivity (NPP); Carnegie-Ames-Stanford Approach (CASA); Hurst exponent; land use change; Extreme Gradient Boosting (XGBoost); SHapley Additive exPlanations (SHAP); hydrothermal thresholds
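The persistence analysis above relies on the Hurst exponent. A rough rescaled-range (R/S) estimator in pure Python (a simplified illustration; the function name, window sizes, and series are arbitrary choices, not the study's workflow):

```python
import math, random

def hurst_rs(series, window_sizes=(8, 16, 32, 64)):
    """Estimate the Hurst exponent as the slope of log(R/S) vs. log(window size)."""
    xs, ys = [], []
    for w in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - w + 1, w):
            chunk = series[start:start + w]
            mean = sum(chunk) / w
            dev = [c - mean for c in chunk]
            cum, z = [], 0.0          # cumulative deviation from the window mean
            for d in dev:
                z += d
                cum.append(z)
            r = max(cum) - min(cum)   # range of the cumulative deviations
            s = math.sqrt(sum(d * d for d in dev) / w)
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            xs.append(math.log(w))
            ys.append(math.log(sum(rs_vals) / len(rs_vals)))
    n = len(xs)                       # least-squares slope on the log-log points
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(512)]
h = hurst_rs(noise)   # white noise should land near 0.5; a trending series near 1
```

Values above 0.5 indicate a persistent trend (likely to continue); below 0.5, anti-persistence.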
A Machine Learning Application for Predicting Respiratory Disease Mortality from Meteorological and Air Pollution Factors: A Case Study of Haidian District, Beijing
4
Authors: 陈剑铭, 王晴, 徐鑫, 马子昂, 伯鑫, 李杨. Journal of Beijing University of Chemical Technology (Natural Science Edition) (PKU Core), 2025, No. 6, pp. 1-9 (9 pages)
With advancing urbanization and industrialization, worsening air pollution and increasingly frequent extreme weather events pose a major threat to public health. This study aims to assess the effects of meteorological factors and air pollution on respiratory disease mortality. Using meteorological data, air pollutant data, and respiratory disease mortality data for Haidian District, Beijing, China, from January 1, 2014 to July 31, 2024, a random forest (RF) model was applied to analyze the influence of meteorological factors and air pollutants on respiratory disease mortality, combined with SHapley Additive exPlanations (SHAP) to analyze the contributing factors. Spearman correlation analysis and the RF model showed that SO2, NO2, PM2.5, and PM10 concentrations were positively correlated with respiratory disease mortality, while minimum temperature was negatively correlated with it, and the model performed better in winter than in other seasons. In addition, the model's global SHAP feature results indicated that minimum temperature was the most important factor affecting respiratory disease mortality. The results demonstrate that the RF model has potential for predicting respiratory disease mortality, can effectively combine meteorological and air pollution data for respiratory disease prediction, and that SHAP further improves the interpretability of the machine learning model. This study provides strong support for policymakers in formulating targeted air quality control measures, extreme-temperature health warnings, and seasonal respiratory disease prevention strategies.
Keywords: air pollution; meteorological factors; random forest; SHapley Additive exPlanations (SHAP); respiratory disease mortality; correlation analysis
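The study above pairs Spearman correlation analysis with a random forest. Spearman's rho is just the Pearson correlation of the rank vectors; a self-contained sketch (illustrative helper names):

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because it works on ranks, any monotonic pollutant-mortality relationship scores ±1 even when nonlinear.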
Runoff Simulation in the Qinghai Lake Basin by Coupling a Hydrological Model with Deep Learning
5
Authors: 李娜, 赵永, 梁四海, 王旭升, 万力. Journal of Water Resources Research, 2025, No. 5, pp. 458-470 (13 pages)
Focusing on hydrological processes in the Qinghai Lake Basin of China, this study developed a hybrid model coupling the conceptual hydrological model FLEX (Flux Exchange) with a Gated Recurrent Unit (GRU) network, based on multi-year meteorological and hydrological records, to simulate and forecast daily runoff of the Buha River, the largest tributary in the basin. Three strategies were adopted to improve simulation accuracy: the differential evolution adaptive algorithm DREAM(zs) was introduced to invert hydrological parameters and optimize the FLEX model; variational mode decomposition (VMD) was used to extract information and features from the runoff series; and the sparrow search algorithm (SSA) was used to optimize the parameters of the deep learning GRU. The FLEX simulation results, together with meteorological data, were fed into the neural network, yielding the FLEX-VMD-SSA-GRU hybrid model. The influence and contribution of different meteorological inputs were also examined: based on seven principal meteorological variables, fourteen input scenarios of increasing size were simulated. Finally, SHAP was applied to the deep learning results to reveal the contribution and importance of meteorological variables to long-term runoff trends.
Keywords: runoff simulation; conceptual hydrological model FLEX; gated recurrent unit (GRU); FLEX-VMD-SSA-GRU hybrid model; Shapley Additive Explanations (SHAP)
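The hybrid model above uses GRU units. One scalar GRU step written out from the standard gate equations (a toy illustration, not the authors' network; the weight names are invented):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, W):
    """One GRU step for scalar input/state. W holds the weights and biases of
    the update gate z, the reset gate r, and the candidate state h_tilde."""
    z = sigmoid(W["wz_x"] * x + W["wz_h"] * h + W["bz"])
    r = sigmoid(W["wr_x"] * x + W["wr_h"] * h + W["br"])
    h_tilde = math.tanh(W["wh_x"] * x + W["wh_h"] * (r * h) + W["bh"])
    return (1.0 - z) * h + z * h_tilde   # convex blend of old state and candidate

# With all parameters zero: z = r = 0.5 and h_tilde = 0, so the state halves
# on every step regardless of the input.
W0 = {k: 0.0 for k in ["wz_x", "wz_h", "bz", "wr_x", "wr_h", "br",
                       "wh_x", "wh_h", "bh"]}
h = 1.0
for _ in range(3):
    h = gru_cell(0.0, h, W0)
```

The update gate z decides how much of the previous runoff state to carry forward, which is what makes GRUs suited to daily streamflow memory.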
A Deep-Learning-Based Model for Predicting the Molecular-Level Composition of Heavy Distillate Oil (Cited by 1)
6
Authors: 袁壮, 王源, 杨哲, 徐伟, 周鑫, 赵辉, 陈小博, 杨朝合, 林扬. Acta Petrolei Sinica (Petroleum Processing Section) (PKU Core), 2025, No. 2, pp. 362-370 (9 pages)
With the arrival of the industrial big data era, deep-learning-based models for predicting crude oil molecular composition offer wide applicability, rapid construction, and high accuracy. However, molecular-level labels for petroleum fractions are difficult to obtain, which makes it hard to meet the training requirements of deep learning models. To solve this problem, an innovative method was proposed that combines the commercial process simulator Aspen HYSYS with comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry to build a training database of sufficient scale. A deep neural network (DNN) was then used to build a model for predicting the molecular-level structural composition of heavy distillate oil; the model takes physicochemical properties that are easily measured in refineries as input and outputs molecular-level structural information. The model was applied to predict the molecular composition of the fluid catalytic cracking feedstock of a refinery, and the SHAP (SHapley Additive exPlanation) method was used for interpretability analysis. The results show that the deep-learning-based model accurately predicts the molecular-level structural information of the oil, with an average relative error below 8% for the target unit's feedstock. The model can serve as a soft sensor for the feedstock properties of other refining units and can also provide accurate molecular information on heavy oil feedstocks for the development of molecular-level petroleum models.
Keywords: heavy distillate oil; molecular composition; deep learning; SHapley Additive exPlanation (SHAP); molecular management
Prediction and optimization of flue pressure in sintering process based on SHAP (Cited by 4)
7
Authors: Mingyu Wang, Jue Tang, Mansheng Chu, Quan Shi, Zhen Zhang. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS), 2025, No. 2, pp. 346-359 (14 pages)
Sinter is the core raw material for blast furnaces. Flue pressure, which is an important state parameter, affects sinter quality. In this paper, flue pressure prediction and optimization were studied based on the shapley additive explanation (SHAP) to predict the flue pressure and take targeted adjustment measures. First, the sintering process data were collected and processed. A flue pressure prediction model was then constructed after comparing different feature selection methods and model algorithms using SHAP + extremely randomized trees (ET). The prediction accuracy of the model within the error range of ±0.25 kPa was 92.63%. SHAP analysis was employed to improve the interpretability of the prediction model. The effects of various sintering operation parameters on flue pressure, the relationship between the numerical range of key operation parameters and flue pressure, the effect of operation parameter combinations on flue pressure, and the prediction process of the flue pressure prediction model on a single sample were analyzed. A flue pressure optimization module was also constructed and analyzed when the prediction satisfied the judgment conditions. The operating parameter combination was then pushed. The flue pressure was increased by 5.87% during the verification process, achieving a good optimization effect.
Keywords: sintering process; flue pressure; shapley additive explanation; prediction; optimization
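SHAP, used throughout these studies, approximates Shapley values efficiently for tree models. For intuition, the exact definition can be computed directly on a tiny model by averaging each feature's marginal contribution over all feature orderings (an illustrative sketch of the game-theoretic definition, not the SHAP library):

```python
from itertools import permutations

def exact_shapley(f, x, baseline):
    """Exact Shapley values: average the marginal contribution of each feature
    over every ordering; features not yet added keep their baseline values."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]         # reveal feature i
            now = f(current)
            phi[i] += now - prev      # its marginal contribution in this order
            prev = now
    return [p / len(perms) for p in phi]

# For an additive model, each feature's Shapley value is exactly its own
# term's deviation from the baseline.
f = lambda v: 2 * v[0] + 3 * v[1] - v[2]
phi = exact_shapley(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
```

The efficiency property, that the values sum to f(x) - f(baseline), is what lets SHAP decompose a single flue-pressure prediction into per-parameter contributions.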
Analysis of Factors Influencing the Temporary Plugging Effect in Tight Sandstone Gas Based on Random Forest and the SHAP Algorithm
8
Authors: 黄浩, 车恒达, 孔祥伟, 辛富斌, 向九洲, 吉俊杰. Science Technology and Engineering (PKU Core), 2025, No. 26, pp. 11135-11143 (9 pages)
To investigate how geological factors, staging and perforation parameters, and fracturing operation factors influence the inter-cluster temporary plugging effect, a quantitative model and formula for the temporary plugging effect were constructed, 76 data sets from temporarily plugged wells in the Sulige block were collected, and a temporary plugging effect algorithm model was built by combining random forest with the SHAP (Shapley additive explanations) value algorithm. Validation showed that the quantified temporary plugging effect was positively correlated with the gas production contribution rate (P = 0.037), confirming the high accuracy of the quantitative model and formula; and because the MSE, MAE, and R² of the training and test sets differed only slightly, the algorithm model showed strong generalization and high accuracy. On this basis, an analysis of influencing factors showed that five factors contributed most to the temporary plugging effect: total number of stages, permeability, number of temporary plugging balls, cluster spacing, and sand ratio. Further single-factor analysis found that the improvement of the plugging effect with an increasing number of stages applies only to vertical wells, not horizontal wells; that the plugging effect worsens as permeability increases; and that the plugging effect grows positively when the number of plugging balls is below 50, cluster spacing exceeds 20 m, and the sand ratio is between 18% and 20%. The results can serve as a reference for designing temporary plugging operations in the Sulige and similar gas fields.
Keywords: Sulige gas field; tight sandstone gas; temporary plugging effect; random forest; SHAP (Shapley additive explanations) value; model interpretation
Construction and validation of machine learning-based predictive model for colorectal polyp recurrence one year after endoscopic mucosal resection (Cited by 2)
9
Authors: Yi-Heng Shi, Jun-Liang Liu, Cong-Cong Cheng, Wen-Ling Li, Han Sun, Xi-Liang Zhou, Hong Wei, Su-Juan Fei. World Journal of Gastroenterology, 2025, No. 11, pp. 46-62 (17 pages)
BACKGROUND: Colorectal polyps are precancerous diseases of colorectal cancer. Early detection and resection of colorectal polyps can effectively reduce the mortality of colorectal cancer. Endoscopic mucosal resection (EMR) is a common polypectomy procedure in clinical practice, but it has a high postoperative recurrence rate. Currently, there is no predictive model for the recurrence of colorectal polyps after EMR.
AIM: To construct and validate a machine learning (ML) model for predicting the risk of colorectal polyp recurrence one year after EMR.
METHODS: This study retrospectively collected data from 1694 patients at three medical centers in Xuzhou. Additionally, a total of 166 patients were collected to form a prospective validation set. Feature variable screening was conducted using univariate and multivariate logistic regression analyses, and five ML algorithms were used to construct the predictive models. The optimal models were evaluated based on different performance metrics. Decision curve analysis (DCA) and SHapley Additive exPlanation (SHAP) analysis were performed to assess clinical applicability and predictor importance.
RESULTS: Multivariate logistic regression analysis identified 8 independent risk factors for colorectal polyp recurrence one year after EMR (P < 0.05). Among the models, eXtreme Gradient Boosting (XGBoost) demonstrated the highest area under the curve (AUC) in the training set, internal validation set, and prospective validation set, with AUCs of 0.909 (95%CI: 0.89-0.92), 0.921 (95%CI: 0.90-0.94), and 0.963 (95%CI: 0.94-0.99), respectively. DCA indicated favorable clinical utility for the XGBoost model. SHAP analysis identified smoking history, family history, and age as the top three most important predictors in the model.
CONCLUSION: The XGBoost model has the best predictive performance and can assist clinicians in providing individualized colonoscopy follow-up recommendations.
Keywords: colorectal polyps; machine learning; predictive model; risk factors; SHapley Additive exPlanation
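The AUC values reported above can be read via the Mann-Whitney formulation: the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counting half. A minimal sketch (hypothetical helper name, not the study's evaluation code):

```python
def auc(labels, scores):
    """AUC as P(score_pos > score_neg), ties counted as 0.5 -- the
    Mann-Whitney U statistic divided by n_pos * n_neg."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike accuracy, this rank-based view is threshold-free, which is why AUC is the headline metric for imbalanced recurrence data.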
Explainable machine learning for predicting mechanical properties of hot-rolled steel pipe (Cited by 2)
10
Authors: Jing-dong Li, You-zhao Sun, Xiao-chen Wang, Quan Yang, Guo-dong Liu, Hao-tang Qie, Feng-xia Li. Journal of Iron and Steel Research International, 2025, No. 8, pp. 2475-2490 (16 pages)
Mechanical properties are critical to the quality of hot-rolled steel pipe products. Accurately understanding the relationship between rolling parameters and mechanical properties is crucial for effective prediction and control. To address this, an industrial big data platform was developed to collect and process multi-source heterogeneous data from the entire production process, providing a complete dataset for mechanical property prediction. The adaptive bandwidth kernel density estimation (ABKDE) method was proposed to adjust bandwidth dynamically based on data density. Combining long short-term memory neural networks with ABKDE offers robust prediction interval capabilities for mechanical properties. The proposed method was deployed in a large-scale steel plant, which demonstrated superior prediction interval performance compared to lower upper bound estimation, mean variance estimation, and extreme learning machine-adaptive bandwidth kernel density estimation, achieving a prediction interval normalized average width of 0.37, a prediction interval coverage probability of 0.94, and the lowest coverage width-based criterion of 1.35. Notably, shapley additive explanations-based explanations significantly improved the proposed model's credibility by providing a clear analysis of feature impacts.
Keywords: mechanical property; hot-rolled steel pipe; machine learning; adaptive bandwidth kernel density estimation; Shapley additive explanations-based explanation
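The ABKDE idea above, bandwidths that adapt to local data density, can be sketched in the style of classical adaptive KDE: a fixed-bandwidth pilot estimate sets a per-point bandwidth, narrow where data are dense and wide where they are sparse (an illustrative sketch only; the paper's ABKDE details are not reproduced here):

```python
import math

def abkde(data, pilot_h=1.0, alpha=0.5):
    """Adaptive-bandwidth Gaussian KDE with a fixed-bandwidth pilot density."""
    n = len(data)
    gauss = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    pilot = lambda x: sum(gauss((x - d) / pilot_h) for d in data) / (n * pilot_h)
    dens = [pilot(d) for d in data]
    g = math.exp(sum(math.log(p) for p in dens) / n)   # geometric mean density
    # local bandwidth: shrink where pilot density is high, widen where low
    h_loc = [pilot_h * (p / g) ** (-alpha) for p in dens]
    def density(x):
        return sum(gauss((x - d) / h) / h for d, h in zip(data, h_loc)) / n
    return density, h_loc

data = [0.0, 0.1, -0.1, 0.05, 5.0]   # a tight cluster plus one outlier
f, h_loc = abkde(data)
```

In the paper's setting, a density estimate like this over prediction errors is what turns point forecasts into prediction intervals.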
Machine-Learning-Based Models for Predicting Voltage and Effluent Copper Ion Concentration in the Electrowinning Process of Copper Electrorefining
11
Authors: 闫哲祯, 卢金成, 程寒, 廖嘉琪, 徐夫元, 段宁. Nonferrous Metals (Extractive Metallurgy) (PKU Core), 2025, No. 9, pp. 13-24 (12 pages)
Electrowinning is currently the most widely used process for purifying copper electrolyte, but its outlet copper ion concentration fluctuates widely and is difficult to regulate manually, which can sharply increase the load on the downstream sulfidation unit and the amount of waste from copper-arsenic co-precipitation, while traditional prediction models suffer from poor interpretability, steady-state restrictions, and low generalization. To address this, multi-parameter models were built to accurately predict voltage and effluent copper ion concentration in industrial electrowinning. A comparison of ten machine learning models showed that GBR performed best for voltage prediction (coefficient of determination R² = 0.79, mean squared error MSE = 1.25), while XGBoost was most accurate for effluent copper ion concentration (R² = 0.87, MSE = 5.58). SHAP interpretability analysis indicated that current and time are the dominant factors governing voltage and effluent copper ion concentration, respectively. The decision mechanisms of the models are consistent with electrochemical principles and the law of mass conservation, overcoming the limitations of traditional models in representing nonlinear relationships and providing a basis for early warning of abnormal operating conditions, dynamic optimization of key parameters, and reduction of pollutant generation.
Keywords: copper electrowinning; machine learning; Gradient Boosting Regression (GBR); eXtreme Gradient Boosting (XGBoost); interpretability analysis; Shapley Additive exPlanations (SHAP)
Enhanced Wheat Disease Detection Using Deep Learning and Explainable AI Techniques
12
Authors: Hussam Qushtom, Ahmad Hasasneh, Sari Masri. Computers, Materials & Continua, 2025, No. 7, pp. 1379-1395 (17 pages)
This study presents an enhanced convolutional neural network (CNN) model integrated with Explainable Artificial Intelligence (XAI) techniques for accurate prediction and interpretation of wheat crop diseases. The aim is to streamline the detection process while offering transparent insights into the model's decision-making to support effective disease management. To evaluate the model, a dataset was collected from wheat fields in Kotli, Azad Kashmir, Pakistan, and tested across multiple data splits. The proposed model demonstrates improved stability, faster convergence, and higher classification accuracy. The results show significant improvements in prediction accuracy and stability compared to prior works, achieving up to 100% accuracy in certain configurations. In addition, XAI methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) were employed to explain the model's predictions, highlighting the most influential features contributing to classification decisions. The combined use of CNN and XAI offers a dual benefit: strong predictive performance and clear interpretability of outcomes, which is especially critical in real-world agricultural applications. These findings underscore the potential of integrating deep learning models with XAI to advance automated plant disease detection. The study offers a precise, reliable, and interpretable solution for improving wheat production and promoting agricultural sustainability. Future extensions of this work may include scaling the dataset across broader regions and incorporating additional modalities such as environmental data to enhance model robustness and generalization.
Keywords: convolutional neural network (CNN); wheat crop disease; deep learning; disease detection; shapley additive explanations (SHAP); local interpretable model-agnostic explanations (LIME)
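LIME, used above, explains one prediction by fitting a weighted linear surrogate around a single instance. A self-contained sketch with Gaussian perturbations and an RBF proximity kernel (illustrative only, not the LIME package; `lime_explain` and `solve` are hypothetical helpers):

```python
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lime_explain(black_box, instance, n_samples=200, width=1.0, seed=0):
    """Sample perturbations near `instance`, weight them by proximity, and
    solve the weighted least-squares normal equations X'WX beta = X'Wy."""
    rng = random.Random(seed)
    d = len(instance)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [v + rng.gauss(0.0, 1.0) for v in instance]
        dist2 = sum((a - b) ** 2 for a, b in zip(z, instance))
        X.append([1.0] + z)                      # leading 1 for the intercept
        y.append(black_box(z))
        w.append(math.exp(-dist2 / (2.0 * width ** 2)))
    p = d + 1
    A = [[sum(wk * xk[i] * xk[j] for wk, xk in zip(w, X)) for j in range(p)]
         for i in range(p)]
    b = [sum(wk * xk[i] * yk for wk, xk, yk in zip(w, X, y)) for i in range(p)]
    return solve(A, b)                           # [intercept, coef_1, ..., coef_d]

# A linear stand-in "black box": the local surrogate should recover it exactly.
f = lambda x: 1.0 + 2.0 * x[0] - 3.0 * x[1]
beta = lime_explain(f, instance=[0.5, -0.2])
```

For images, the perturbations act on superpixels rather than raw features, but the weighted-linear-fit core is the same.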
Explainable artificial intelligence model for the prediction of undrained shear strength
13
Authors: Ho-Hong-Duy Nguyen, Thanh-Nhan Nguyen, Thi-Anh-Thu Phan, Ngoc-Thi Huynh, Quoc-Dat Huynh, Tan-Tai Trieu. Theoretical & Applied Mechanics Letters, 2025, No. 3, pp. 284-295 (12 pages)
Machine learning (ML) models are widely used for predicting undrained shear strength (USS), but interpretability has been a limitation in various studies. Therefore, this study introduced shapley additive explanations (SHAP) to clarify the contribution of each input feature in USS prediction. Three ML models, artificial neural network (ANN), extreme gradient boosting (XGBoost), and random forest (RF), were employed, with accuracy evaluated using mean squared error, mean absolute error, and coefficient of determination (R²). The RF achieved the highest performance with an R² of 0.82. SHAP analysis identified pre-consolidation stress as a key contributor to USS prediction. SHAP dependence plots reveal that the ANN captures smoother, linear feature-output relationships, while the RF handles complex, non-linear interactions more effectively. This suggests a non-linear relationship between USS and input features, with RF outperforming ANN. These findings highlight SHAP's role in enhancing interpretability and promoting transparency and reliability in ML predictions for geotechnical applications.
Keywords: prediction of undrained shear strength; explanation model; Shapley additive explanation model; explainable AI
Early warning system for risk assessment in geotechnical engineering using Kolmogorov-Arnold networks
14
Authors: Shan Lin, Miao Dong, Hongwei Guo, Lele Zheng, Kaiyang Zhao, Hong Zheng. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 12, pp. 8088-8113 (26 pages)
In this study, we used the Kolmogorov-Arnold networks (KAN) model based on the Kolmogorov-Arnold representation theorem for a comprehensive and fair evaluation. We compare its performance with four other powerful classification models across three datasets: a simple slope binary classification dataset, an imbalanced rockburst dataset, and a highly discrete liquefaction dataset. First, a thorough review of machine-learning algorithms for geohazard assessment was conducted. Subsequently, three datasets were collected from real engineering practices, and their data structures were visualized. Bayesian optimization was then used to adjust the parameters of all models across all datasets. To ensure model interpretability, a global sensitivity analysis based on Sobol indices was performed, establishing an interpretable visual analysis of the model's decision-making process. For a fair evaluation, various metrics and repeated stratified 10-fold cross-validation were employed to comprehensively analyze the predictive results of the models. The results indicate that although the KAN model, based on the RBF kernel, achieves the expected performance on the binary classification dataset, it also performs well on imbalanced and highly discrete datasets, significantly surpassing other commonly used classification models. This demonstrated the broad application potential of the KAN model in geotechnical engineering.
Keywords: deep learning; Kolmogorov-Arnold representation theorem; Kolmogorov-Arnold networks (KAN); slope; rockburst; liquefaction; model explanation
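A KAN layer differs from a standard neural layer by placing learnable univariate functions on edges rather than fixed activations on nodes. A toy forward pass using a small fixed basis in place of learnable B-splines (purely illustrative, far simpler than the paper's RBF-kernel KAN):

```python
import math

def edge_fn(coeffs, x):
    """Univariate edge function as a small basis expansion. KAN proper uses
    learnable splines; a polynomial-plus-sine basis is a toy stand-in."""
    basis = [1.0, x, x * x, math.sin(x)]
    return sum(c * b for c, b in zip(coeffs, basis))

def kan_layer(x_vec, edge_coeffs):
    """Each output sums one univariate function per input:
    y_j = sum_i phi_{j,i}(x_i) -- the Kolmogorov-Arnold form."""
    return [sum(edge_fn(edge_coeffs[j][i], x) for i, x in enumerate(x_vec))
            for j in range(len(edge_coeffs))]

# Two inputs -> one output, with phi_{0,0}(x) = x^2 and phi_{0,1}(x) = sin(x).
coeffs = [[[0.0, 0.0, 1.0, 0.0],
           [0.0, 0.0, 0.0, 1.0]]]
out = kan_layer([2.0, 0.0], coeffs)
```

Because each edge function is univariate, the fitted shapes can be plotted directly, which is the interpretability appeal for geohazard features.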
XAI: Navigating the future of autonomous ships
15
Author: 李朦. Crazy English (New Reading & Writing), 2025, No. 8, pp. 24-27, 77 (5 pages)
The Titanic sank 113 years ago on April 14-15, after hitting an iceberg, with human error likely causing the ship to wander into those dangerous waters. Today, autonomous systems built on AI can help ships avoid such accidents. But could such a system explain to the captain why it was controlling the ship in a certain way?
Keywords: controlling ship; navigation safety; autonomous ships; iceberg avoidance; AI captain explanation; human error
MMGCF: Generating Counterfactual Explanations for Molecular Property Prediction via Motif Rebuild
16
Authors: Xiuping Zhang, Qun Liu, Rui Han. Journal of Computer and Communications, 2025, No. 1, pp. 152-168 (17 pages)
Predicting molecular properties is essential for advancing drug discovery and design. Recently, Graph Neural Networks (GNNs) have gained prominence due to their ability to capture the complex structural and relational information inherent in molecular graphs. Despite their effectiveness, the "black-box" nature of GNNs remains a significant obstacle to their widespread adoption in chemistry, as it hinders interpretability and trust. In this context, several explanation methods based on factual reasoning have emerged. These methods aim to interpret the predictions made by GNNs by analyzing the key features contributing to the prediction. However, these approaches fail to answer a critical question: how to ensure that the structure-property mapping learned by GNNs is consistent with established domain knowledge. In this paper, we propose MMGCF, a novel counterfactual explanation framework designed specifically for GNN-based molecular property prediction. MMGCF constructs a hierarchical tree structure on molecular motifs, enabling the systematic generation of counterfactuals through motif perturbations. This framework identifies causally significant motifs and elucidates their impact on model predictions, offering insights into the relationship between structural modifications and predicted properties. Our method demonstrates its effectiveness through comprehensive quantitative and qualitative evaluations of four real-world molecular datasets.
Keywords: interpretability; causal relationship; counterfactual explanation; molecular graph generation
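The motif-perturbation counterfactuals above search for small structural changes that flip a model's prediction. The core idea can be shown on binary features, trying the smallest flip sets first so the first hit is minimal (a toy sketch, far simpler than MMGCF's motif rebuilding; all names are hypothetical):

```python
from itertools import combinations

def counterfactual(predict, x, n_features, max_flips=3):
    """Find the smallest set of binary-feature flips that changes the label.
    Breadth-first over flip-set sizes guarantees minimality of the first hit."""
    base = predict(x)
    for k in range(1, max_flips + 1):
        for flips in combinations(range(n_features), k):
            z = list(x)
            for i in flips:
                z[i] = 1 - z[i]
            if predict(z) != base:
                return z, flips
    return None, ()

# Toy classifier: positive when at least two of the first three flags are set.
clf = lambda v: int(v[0] + v[1] + v[2] >= 2)
cf, flips = counterfactual(clf, [1, 0, 0, 1], n_features=4)
```

In the molecular setting, the "flips" become chemically valid motif substitutions, which is what keeps the generated counterfactuals realistic.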
Prediction of parastomal hernia in patients undergoing preventive ostomy after rectal cancer resection using machine learning
17
Authors: Wang-Shuo Yang, Yang Su, Yan-Qi Li, Jun-Bo Hu, Meng-Die Liu, Lu Liu. World Journal of Gastrointestinal Surgery, 2025, No. 9, pp. 197-205 (9 pages)
BACKGROUND: Parastomal hernia (PSH) is a common and challenging complication following preventive ostomy in rectal cancer patients, lacking accurate tools for early risk prediction.
AIM: To explore the application of machine learning algorithms in predicting the occurrence of PSH in patients undergoing preventive ostomy after rectal cancer resection, providing valuable support for clinical decision-making.
METHODS: A retrospective analysis was conducted on the clinical data of 579 patients who underwent rectal cancer resection with preventive ostomy at Tongji Hospital, Huazhong University of Science and Technology, between January 2015 and June 2023. Various machine learning models were constructed and trained using preoperative and intraoperative clinical variables to assess their predictive performance for PSH risk. SHapley Additive exPlanations (SHAP) were used to analyze the importance of features in the models.
RESULTS: A total of 579 patients were included, with 31 (5.3%) developing PSH. Among the machine learning models, the random forest (RF) model showed the best performance. In the test set, the RF model achieved an area under the curve of 0.900, sensitivity of 0.900, and specificity of 0.725. SHAP analysis revealed that tumor distance from the anal verge, body mass index, and preoperative hypertension were the key factors influencing the occurrence of PSH.
CONCLUSION: Machine learning, particularly the RF model, demonstrates high accuracy and reliability in predicting PSH after preventive ostomy in rectal cancer patients. This technology supports personalized risk assessment and postoperative management, showing significant potential for clinical application. An online predictive platform based on the RF model (https://yangsu2023.shinyapps.io/parastomal_hernia/) has been developed to assist in early screening and intervention for high-risk patients, further enhancing postoperative management and improving patients' quality of life.
Keywords: machine learning; rectal cancer; parastomal hernia; SHapley Additive exPlanation algorithms; predictive model
Prediction and Sensitivity Analysis of Foam Concrete Compressive Strength Based on Machine Learning Techniques with Hyperparameter Optimization
18
Authors: Sen Yang, Jie Zhong, Boyu Gan, Yi Sun, Changming Bu, Mingtao Zhang, Jiehong Li, Yang Yu. Computer Modeling in Engineering & Sciences, 2025, No. 9, pp. 2943-2967 (25 pages)
Foam concrete is widely used in engineering due to its lightweight and high porosity. Its compressive strength, a key performance indicator, is influenced by multiple factors, showing nonlinear variation. As compressive strength tests for foam concrete take a long time, a fast and accurate prediction method is needed. In recent years, machine learning has become a powerful tool for predicting the compressive strength of cement-based materials. However, existing studies often use a limited number of input parameters, and the prediction accuracy of machine learning models under the influence of multiple parameters and nonlinearity remains unclear. This study selects foam concrete density, water-to-cement ratio (W/C), supplementary cementitious material replacement rate (SCM), fine aggregate to binder ratio (FA/Binder), superplasticizer content (SP), and age of the concrete (Age) as input parameters, with compressive strength as the output. Five different machine learning models were compared, and sensitivity analysis, based on Shapley Additive Explanations (SHAP), was used to assess the contribution of each input parameter. The results show that Gaussian Process Regression (GPR) outperforms the other models, with R², RMSE, MAE, and MAPE values of 0.95, 1.6, 0.81, and 0.2, respectively. This is because GPR, optimized through Bayesian methods, better fits complex nonlinear relationships, especially when considering a large number of input parameters. Sensitivity analysis indicates that the influence of input parameters on compressive strength decreases in the following order: foam concrete density, W/C, Age, FA/Binder, SP, and SCM.
Keywords: foam concrete compressive strength; machine learning; Gaussian process regression; shapley additive explanations
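Gaussian process regression, the best model above, has a closed-form posterior mean: k(x*, X)(K + sigma² I)⁻¹ y. A minimal pure-Python version with an RBF kernel (illustrative only; no hyperparameter tuning, unlike the Bayesian-optimized GPR in the study):

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential kernel on scalars."""
    return math.exp(-((a - b) ** 2) / (2.0 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gpr_posterior_mean(x_train, y_train, x_query, noise=1e-6, length=1.0):
    """Posterior mean k(x*, X) alpha with alpha = (K + noise I)^-1 y."""
    n = len(x_train)
    K = [[rbf(x_train[i], x_train[j], length) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, y_train)
    return sum(rbf(x_query, xi, length) * a for xi, a in zip(x_train, alpha))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.8, 0.9, 0.1]   # samples of a smooth, sin-like curve
```

With near-zero noise the posterior mean almost interpolates the training data; the same machinery also yields a posterior variance, which is why GPR pairs naturally with Bayesian tuning.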
Serum calcium-based interpretable machine learning model for predicting anastomotic leakage after rectal cancer resection:A multi-center study
19
Authors: Bo-Yu Kang, Yi-Huan Qiao, Jun Zhu, Bao-Liang Hu, Ze-Cheng Zhang, Ji-Peng Li, Yan-Jiang Pei. World Journal of Gastroenterology, 2025, No. 19, pp. 86-99 (14 pages)
BACKGROUND: Despite the promising prospects of utilizing artificial intelligence and machine learning (ML) for comprehensive disease analysis, few models constructed have been applied in clinical practice due to their complexity and the lack of reasonable explanations. In contrast to previous studies with small sample sizes and limited model interpretability, we developed a transparent eXtreme Gradient Boosting (XGBoost)-based model supported by multi-center data, using patients' basic information and clinical indicators to forecast the occurrence of anastomotic leakage (AL) after rectal cancer resection surgery. The model demonstrated robust predictive performance and identified clinically relevant thresholds, which may assist physicians in optimizing perioperative management.
AIM: To develop an interpretable ML model for accurately predicting the occurrence probability of AL after rectal cancer resection and define our clinical alert values for serum calcium ions.
METHODS: Patients who underwent anterior resection of the rectum for rectal carcinoma at the Department of Digestive Surgery, Xijing Hospital of Digestive Diseases, Air Force Medical University, and Shaanxi Provincial People's Hospital, were retrospectively collected from January 2011 to December 2021. Ten ML models were integrated to analyze the data and develop the predictive models. Receiver operating characteristic (ROC) curves, calibration curve, decision curve analysis, accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score were used to evaluate model performance. We employed the SHapley Additive exPlanations (SHAP) algorithm to explain the feature importance of the optimal model.
RESULTS: A total of ten features were integrated to construct the predictive model and identify the optimal model. XGBoost was considered the best-performing model with an area under the ROC curve (AUC) of 0.984 (95% confidence interval: 0.972-0.996) in the test set (accuracy: 0.925; sensitivity: 0.92; specificity: 0.927). Furthermore, the model achieved an AUC of 0.703 in external validation. The interpretable SHAP algorithm revealed that the serum calcium ion level was the crucial factor influencing the predictions of the model.
CONCLUSION: A superior predictive model, leveraging clinical data, has been crafted by employing the most effective XGBoost from a selection of ten algorithms. This model, by predicting the occurrence of AL in patients after rectal cancer resection, has identified the significant role of serum calcium ion levels, providing guidance for clinical practice. The integration of SHAP provides a clear interpretation of the model's predictions.
Keywords: Machine learning; Rectal cancer; Anastomotic leakage; SHapley Additive exPlanations algorithms
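The records in this listing report ROC-derived metrics (AUC, sensitivity, specificity) for their classifiers. As a minimal, hedged sketch — not the papers' code, and the example labels/scores are made up — the rank-based AUC and threshold metrics can be computed from raw prediction scores like this:

```python
# Minimal illustration of the evaluation metrics named in the abstract:
# AUC via the rank (Mann-Whitney) statistic, plus sensitivity/specificity
# at a fixed decision threshold. Toy data only; not from either study.

def auc(labels, scores):
    """Probability a random positive outscores a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold=0.5):
    """Sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
area = auc(y, s)          # 8 of 9 positive/negative pairs ranked correctly
se, sp = sens_spec(y, s)  # both 2/3 at the 0.5 cutoff
```

In practice such metrics come from library routines (e.g., scikit-learn's `roc_auc_score`), but the rank formulation above is what an AUC of 0.984 is measuring: near-perfect separation of positive and negative cases by the model's scores.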
Addressing accuracy challenges in machine learning for debris flow susceptibility: Insights from the Yalong River basin
20
Authors: MING Zaiyang, ZHANG Jianqiang, HE Haiqing, ZHANG Lili, CHEN Rong, JIA Yang 《Journal of Mountain Science》 2025, Issue 6, pp. 2034-2052 (19 pages)
Machine learning-based Debris Flow Susceptibility Mapping (DFSM) has emerged as an effective approach for assessing debris flow likelihood, yet its application faces three critical challenges: insufficient reliability of training samples caused by biased negative sampling, opaque decision-making mechanisms in models, and subjective susceptibility mapping methods that lack quantitative evaluation criteria. This study focuses on the Yalong River basin. By integrating high-resolution remote sensing interpretation and field surveys, we established a refined sample database of 1,736 debris flow gullies. To address spatial bias in traditional random negative sampling, we developed a semi-supervised optimization strategy based on iterative confidence screening. Comparative experiments with four tree-based models (XGBoost, CatBoost, LGBM, and Random Forest) reveal that the optimized sampling strategy improved overall model performance by 8%-12%, with XGBoost achieving the highest accuracy (AUC = 0.882) and Random Forest the lowest (AUC = 0.820). SHAP-based global-local interpretability analysis (applicable to all tree models) identifies elevation and short-duration rainfall as the dominant controlling factors. Furthermore, among the tested tree-based models, XGBoost optimized with semi-supervised sampling demonstrates the highest reliability in DFSM, achieving a comprehensive accuracy of 83.64% owing to its optimal balance between generalization and stability.
Keywords: Debris flow; Susceptibility mapping; Accuracy assessment; Yalong River basin; Machine learning; SHapley Additive exPlanations
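The "iterative confidence screening" strategy this abstract describes — filtering out randomly sampled negatives that the current model scores as positive-like before retraining — can be illustrated schematically. This is a simplified sketch under stated assumptions: the toy one-dimensional `fit_score` proxy model, the `cutoff`, and the round count are all hypothetical, not the authors' implementation.

```python
# Schematic of semi-supervised negative-sample screening: in each round,
# a proxy model is fit on the current samples, and candidate "negatives"
# that score above the confidence cutoff (i.e., look positive) are dropped.

def screen_negatives(positives, negatives, rounds=3, cutoff=0.8):
    kept = list(negatives)
    for _ in range(rounds):
        score = fit_score(positives, kept)          # refit on the cleaned set
        kept = [x for x in kept if score(x) < cutoff]
    return kept

def fit_score(pos, neg):
    """Toy 1-D proxy model: score a sample by its closeness to the positive mean."""
    mp = sum(pos) / len(pos)
    mn = sum(neg) / len(neg)
    def score(x):
        dp, dn = abs(x - mp), abs(x - mn)
        return dn / (dp + dn) if dp + dn else 0.5   # high score -> positive-like
    return score

# A mislabeled "negative" near the positives (0.95) is screened out:
cleaned = screen_negatives([0.9, 1.0, 1.1], [0.0, 0.1, 0.95])  # -> [0.0, 0.1]
```

In the study itself the proxy model would be one of the tree ensembles (XGBoost, CatBoost, LGBM, Random Forest) operating on the conditioning factors, but the loop structure — score, screen, refit — is the core of the confidence-screening idea.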