Journal Articles: 267 articles found
Extreme gradient boosting with Shapley Additive Explanations for landslide susceptibility at slope unit and hydrological response unit scales
1
Authors: Ananta Man Singh Pradhan, Pramit Ghimire, Suchita Shrestha, Ji-Sung Lee, Jung-Hyun Lee, Hyuck-Jin Park. Geoscience Frontiers, 2025, Issue 4, pp. 357-372 (16 pages)
This study provides an in-depth comparative evaluation of landslide susceptibility using two distinct spatial units, slope units (SUs) and hydrological response units (HRUs), within Goesan County, South Korea. Leveraging the capabilities of the extreme gradient boosting (XGB) algorithm combined with Shapley Additive Explanations (SHAP), this work assesses the precision and clarity with which each unit predicts areas vulnerable to landslides. SUs capture geomorphological features such as ridges and valleys, focusing on slope stability and landslide triggers. Conversely, HRUs are established from a variety of hydrological factors, including land cover, soil type and slope gradients, to encapsulate the dynamic water processes of the region. The methodological framework includes the systematic gathering, preparation and analysis of data, ranging from historical landslide occurrences to topographical and environmental variables such as elevation, slope angle and land curvature. The XGB algorithm used to construct the landslide susceptibility model (LSM) was combined with SHAP for model interpretation, and the results were evaluated using random cross-validation (RCV) to ensure accuracy and reliability. To ensure optimal model performance, the XGB algorithm's hyperparameters were tuned using differential evolution, considering multicollinearity-free variables. The results show that SUs and HRUs are both effective for LSM, but their effectiveness varies with landscape characteristics. The XGB algorithm demonstrates strong predictive power, and SHAP enhances model transparency regarding the influential variables involved. This work underscores the importance of selecting assessment units tailored to specific landscape characteristics for accurate LSM. The integration of advanced machine learning techniques with interpretative tools offers a robust framework for landslide susceptibility assessment, improving both predictive capability and model interpretability. Future research should integrate broader data sets and explore hybrid analytical models to strengthen the generalizability of these findings across varied geographical settings.
Keywords: Landslide susceptibility mapping; Hydrological response units; Slope units; Extreme gradient boosting; Hyperparameter tuning; Shapley additive explanations
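Several entries in this list rely on SHAP attributions. The underlying Shapley-value idea can be illustrated with an exact computation on a toy model (a minimal pure-Python sketch; the feature names and the additive value function are illustrative stand-ins for a trained XGB predictor, not anything taken from the paper):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: the weighted average marginal contribution
    of each feature over all subsets (tractable only for few features)."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = total
    return phi

# Hypothetical value function standing in for a fitted model's output
# when only the given feature subset is "known" (others held at baseline).
contrib = {"slope": 2.0, "rainfall": 1.5, "land_cover": 0.5}
def v(subset):
    return sum(contrib[f] for f in subset)

phi = shapley_values(list(contrib), v)
# Additivity property: attributions sum to v(all features) - v(empty set)
assert abs(sum(phi.values()) - v(set(contrib))) < 1e-9
```

For an additive value function each feature's Shapley value equals its individual contribution, and the attributions always sum to the gap between the full prediction and the baseline; production SHAP libraries approximate this same quantity efficiently for tree ensembles.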
A Study on the Interpretability of Network Attack Prediction Models Based on Light Gradient Boosting Machine (LGBM) and SHapley Additive exPlanations (SHAP)
2
Authors: Shuqin Zhang, Zihao Wang, Xinyu Su. Computers, Materials & Continua, 2025, Issue 6, pp. 5781-5809 (29 pages)
The methods of network attacks have become increasingly sophisticated, rendering traditional cybersecurity defense mechanisms insufficient to address novel and complex threats effectively. In recent years, artificial intelligence has achieved significant progress in the field of network security. However, many challenges remain, particularly regarding the interpretability of deep learning and ensemble learning algorithms. To address the challenge of enhancing the interpretability of network attack prediction models, this paper proposes a method that combines Light Gradient Boosting Machine (LGBM) and SHapley Additive exPlanations (SHAP). LGBM is employed to model anomalous fluctuations in various network indicators, enabling the rapid and accurate identification and prediction of potential network attack types and thereby facilitating timely defense measures. The model achieved an accuracy of 0.977, precision of 0.985, recall of 0.975, and an F1 score of 0.979, outperforming other models in the domain of network attack prediction. SHAP is utilized to analyze the black-box decision-making process of the model, providing interpretability by quantifying the contribution of each feature to the prediction results and elucidating the relationships between features. The experimental results demonstrate that the network attack prediction model based on LGBM exhibits superior accuracy and outstanding predictive capability, and the SHAP-based interpretability analysis significantly improves the model's transparency and interpretability.
Keywords: Artificial intelligence; network attack prediction; Light Gradient Boosting Machine (LGBM); SHapley Additive exPlanations (SHAP); interpretability
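The metrics reported in this abstract follow the standard confusion-matrix definitions. A small sketch showing how they relate (the counts below are invented for illustration and merely chosen to land near the paper's figures):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # correct positives / predicted positives
    recall = tp / (tp + fn)             # correct positives / actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts, not taken from the paper
acc, p, r, f1 = classification_metrics(tp=975, fp=15, fn=25, tn=985)
```

With these counts, precision is about 0.985 and recall exactly 0.975, so F1 lands near 0.98, consistent with F1 being the harmonic mean of the two.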
MMGCF: Generating Counterfactual Explanations for Molecular Property Prediction via Motif Rebuild
3
Authors: Xiuping Zhang, Qun Liu, Rui Han. Journal of Computer and Communications, 2025, Issue 1, pp. 152-168 (17 pages)
Predicting molecular properties is essential for advancing drug discovery and design. Recently, Graph Neural Networks (GNNs) have gained prominence due to their ability to capture the complex structural and relational information inherent in molecular graphs. Despite their effectiveness, the "black-box" nature of GNNs remains a significant obstacle to their widespread adoption in chemistry, as it hinders interpretability and trust. In this context, several explanation methods based on factual reasoning have emerged. These methods aim to interpret the predictions made by GNNs by analyzing the key features contributing to the prediction. However, these approaches fail to answer a critical question: how can we ensure that the structure-property mapping learned by GNNs is consistent with established domain knowledge? In this paper, we propose MMGCF, a novel counterfactual explanation framework designed specifically for GNN-based molecular property prediction. MMGCF constructs a hierarchical tree structure on molecular motifs, enabling the systematic generation of counterfactuals through motif perturbations. This framework identifies causally significant motifs and elucidates their impact on model predictions, offering insights into the relationship between structural modifications and predicted properties. Our method demonstrates its effectiveness through comprehensive quantitative and qualitative evaluations on four real-world molecular datasets.
Keywords: Interpretability; Causal Relationship; Counterfactual Explanation; Molecular Graph Generation
Research on the Issue of False Explanations in Artificial Intelligence for Medical Image Analysis
4
Author: Weihan Jia. Expert Review of Chinese Medical, 2025, Issue 3, pp. 24-32 (9 pages)
Deep learning models have become a core technological tool in the field of medical image analysis. However, these models often suffer from a lack of transparency in their decision-making processes, leading to challenges related to trust and interpretability in clinical applications. To address this issue, explainable artificial intelligence (XAI) techniques have been applied to medical image analysis. While showing promising potential, XAI also brings significant ethical risks in practice, most notably the problem of spurious explanations. Such explanations may raise further concerns regarding patient privacy, data security, and the attribution of decision-making authority in medical contexts. This paper analyzes the application of XAI methods, particularly saliency maps, in medical image interpretation, identifies the underlying causes of spurious explanations, and proposes possible mitigation strategies. The aim is to contribute to the responsible and sustainable integration of explainable AI into clinical practice.
Keywords: medical image analysis; explainable artificial intelligence; spurious explanation
Investigation of feature contribution to shield tunneling-induced settlement using Shapley additive explanations method (Cited by 16)
5
Authors: K.K. Pabodha M. Kannangara, Wanhuan Zhou, Zhi Ding, Zhehao Hong. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2022, Issue 4, pp. 1052-1063 (12 pages)
Accurate prediction of shield tunneling-induced settlement is a complex problem that requires consideration of many influential parameters. Recent studies reveal that machine learning (ML) algorithms can predict the settlement caused by tunneling. However, well-performing ML models are usually less interpretable, and irrelevant input features decrease both the performance and the interpretability of an ML model. Nonetheless, feature selection, a critical step in the ML pipeline, is usually ignored in most studies focused on predicting tunneling-induced settlement. This study applies four techniques, i.e. the Pearson correlation method, sequential forward selection (SFS), sequential backward selection (SBS) and the Boruta algorithm, to investigate the effect of feature selection on model performance when predicting the tunneling-induced maximum surface settlement (S_max). The data set used in this study was compiled from two metro tunnel projects excavated in Hangzhou, China using earth pressure balance (EPB) shields, and consists of 14 input features and a single output (i.e. S_max). The ML model trained on features selected by the Boruta algorithm demonstrates the best performance in both the training and testing phases. The relevant features chosen by the Boruta algorithm further indicate that tunneling-induced settlement is affected by parameters related to tunnel geometry, geological conditions and shield operation. The recently proposed Shapley additive explanations (SHAP) method explores how the input features contribute to the output of a complex ML model. It is observed that larger settlements are induced during shield tunneling in silty clay. Moreover, the SHAP analysis reveals that low magnitudes of face pressure at the top of the shield increase the model's output.
Keywords: Feature selection; Shield operational parameters; Pearson correlation method; Boruta algorithm; Shapley additive explanations (SHAP) analysis
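Boruta's core mechanism, comparing real-feature importances against shuffled "shadow" copies that have lost any relation to the target, can be sketched in one iteration with scikit-learn (a simplified illustration on synthetic data, assuming NumPy and scikit-learn are available; the full Boruta algorithm repeats this comparison with a statistical test):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=n)  # only feature 0 matters

# Shadow features: shuffle each column independently, breaking any link to y
shadows = rng.permuted(X, axis=0)
X_aug = np.hstack([X, shadows])

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_aug, y)
imp = rf.feature_importances_
threshold = imp[4:].max()  # best importance achieved by pure noise
confirmed = [j for j in range(4) if imp[j] > threshold]
```

A real feature is only worth keeping if it beats the best shadow; here the informative feature 0 clears the threshold easily, while the noise features hover around shadow level.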
CONSORT 2010 checklist of information to include when reporting a randomised trial and further explanations
6
Neural Regeneration Research (SCIE, CAS, CSCD), 2011, Issue 28, pp. 2237-2240 (4 pages)
Keywords: CONSORT 2010; checklist; randomised trial reporting
Review of "Gesture and Speech in the Vocabulary Explanations of One ESL Teacher: A Microanalytic Inquiry" by Anne Lazaraton
7
Author: ZHANG Zi-hong. Sino-US English Teaching, 2011, Issue 12, pp. 747-753 (7 pages)
This paper takes a microanalytic perspective on the speech and gestures used by one teacher of ESL (English as a Second Language) in an intensive English program classroom. Videotaped excerpts from her intermediate-level grammar course were transcribed to represent the speech, gesture and other non-verbal behavior that accompanied unplanned explanations of vocabulary arising during three focus-on-form lessons. The gesture classification system of McNeill (1992), which delineates different types of hand movements (iconics, metaphorics, deictics, beats), was used to understand the role the gestures played in these explanations. Results suggest that gestures and other non-verbal behavior are forms of input to classroom second language learners that must be considered a salient factor in classroom-based SLA (Second Language Acquisition) research.
Keywords: speech and gestures; vocabulary explanations; ESL (English as a Second Language); Anne Lazaraton
Explaining How: The Intelligibility of Mechanical Explanations in Boyle
8
Author: Jan-Erik Jones. Journal of Philosophy Study, 2012, Issue 5, pp. 337-346 (10 pages)
In this paper I examine the following claims by William Eaton in his monograph Boyle on Fire: (i) that Boyle's religious convictions led him to believe that the world was not completely explicable, and this shows that there is a shortcoming in the power of mechanical explanations; (ii) that mechanical explanations offer only sufficient, not necessary explanations, and this too was taken by Boyle to be a limit in the explanatory power of mechanical explanations; (iii) that the mature Boyle thought that there could be more intelligible explanatory models than mechanism; and (iv) that what Boyle says at any point in his career is incompatible with the statement of Maria Boas-Hall, i.e., that the mechanical hypothesis can explicate all natural phenomena. Since all four of these claims are part of Eaton's developmental argument, my rejection of them will not only show how the particular developmental story Eaton diagnoses is inaccurate, but will also explain what limits there actually are in Boyle's account of the intelligibility of mechanical explanations. My account will also show why important philosophers like Locke and Leibniz should be interested in Boyle's philosophical work.
Keywords: Robert Boyle; William Eaton; Maria Boas-Hall; mechanism; explanation; intelligibility
Investigating Black-Box Model for Wind Power Forecasting Using Local Interpretable Model-Agnostic Explanations Algorithm (Cited by 1)
9
Authors: Mao Yang, Chuanyu Xu, Yuying Bai, Miaomiao Ma, Xin Su. CSEE Journal of Power and Energy Systems, 2025, Issue 1, pp. 227-242 (16 pages)
Wind power forecasting (WPF) is important for the safe, stable, and reliable integration of new energy technologies into power systems. Machine learning (ML) algorithms have recently attracted increasing attention in the field of WPF. However, the opaque decisions and lack of trustworthiness of black-box models for WPF could cause scheduling risks. This study develops a method for identifying risky models in practical applications and avoiding those risks. First, a local interpretable model-agnostic explanations algorithm is introduced and improved for WPF model analysis. On that basis, a novel index is presented to quantify the level at which neural networks or other black-box models can trust the features involved in training. Then, by revealing the operational mechanism on local samples, the human interpretability of the black-box model is examined under different accuracies, time horizons, and seasons. This interpretability provides a basis for several technical routes for WPF from the viewpoint of the forecasting model. Moreover, further improvements in WPF accuracy are explored by evaluating interpretable ML models that use multi-horizon global trust modeling and multi-season interpretable feature selection methods. Experimental results from a wind farm in China show that error can be robustly reduced.
Keywords: Black-box model; correlation analysis; feature trust index; local interpretability; local interpretable model-agnostic explanations (LIME); wind power forecasting
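The LIME procedure referenced here can be sketched without any LIME library: perturb around one instance, weight the samples by proximity, and fit a weighted linear surrogate whose coefficients approximate the local feature effects (the black-box function below is a toy stand-in for a trained wind-power model, and the kernel width is an illustrative choice):

```python
import numpy as np

def black_box(X):
    """Toy nonlinear 'model' standing in for a trained forecaster."""
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def lime_explain(x0, predict, n_samples=2000, width=0.5, seed=0):
    """Fit a locally weighted linear surrogate around instance x0."""
    rng = np.random.default_rng(seed)
    Z = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    # Gaussian proximity kernel: nearby perturbations count more
    w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * width ** 2))
    A = np.hstack([Z, np.ones((n_samples, 1))])  # linear terms + intercept
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, predict(Z) * W[:, 0], rcond=None)
    return coef[:-1]  # local slopes, one per feature

x0 = np.array([0.0, 1.0])
slopes = lime_explain(x0, black_box)
```

At this point the true local gradient is (cos 0, x1) = (1.0, 1.0), and the surrogate's slopes recover a smoothed version of it, which is exactly the kind of local attribution LIME reports.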
Applications of Large Multimodal Models(LMMs)in STEM Education:From Visual Explanations to Virtual Experiments
10
Author: Changkui LI. Artificial Intelligence Education Studies, 2025, Issue 2, pp. 1-18 (18 pages)
Generative Artificial Intelligence (GAI) refers to a class of AI systems capable of creating novel, coherent, and contextually relevant content, such as text, images, audio, and video, based on patterns learned from extensive training datasets. The public release and rapid refinement of large language models (LLMs) like ChatGPT have accelerated the adoption of GAI across various medical specialties, offering new tools for education, clinical simulation, and research. Dermatology training, which relies heavily on visual pattern recognition and requires extensive exposure to diverse morphological presentations, faces persistent challenges such as uneven distribution of educational resources, limited patient exposure for rare conditions, and variability in teaching quality. Integrating GAI into pedagogical frameworks offers innovative approaches to address these challenges, potentially enhancing the quality, standardization, scalability, and accessibility of dermatology education. This comprehensive review examines the core concepts and technical foundations of GAI, highlights its specific applications within dermatology teaching and learning, including simulated case generation, personalized learning pathways, and academic support, and discusses the current limitations, practical challenges, and ethical considerations surrounding its use. The aim is to provide a balanced perspective on the significant potential of GAI for transforming dermatology education and to offer evidence-based insights to guide future exploration, implementation, and policy development.
Keywords: Large Multimodal Models (LMMs); STEM Education; Visual Explanations; Virtual Laboratories/Virtual Experiments; Critical AI Literacy
A Deep Learning Framework for Heart Disease Prediction with Explainable Artificial Intelligence
11
Authors: Muhammad Adil, Nadeem Javaid, Imran Ahmed, Abrar Ahmed, Nabil Alrajeh. Computers, Materials & Continua, 2026, Issue 1, pp. 1944-1963 (20 pages)
Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing deep learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the personal key indicators of the heart disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Moreover, model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
Keywords: Heart disease; deep learning; localized random affine shadowsampling; local interpretable model-agnostic explanations; Shapley additive explanations; 10-fold cross-validation
Inverse Design of Composite Materials Based on Latent Space and Bayesian Optimization
12
Authors: Xianrui Lyu, Xiaodan Ren. Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 1-25 (25 pages)
Inverse design of advanced materials represents a pivotal challenge in materials science. Leveraging the latent space of variational autoencoders (VAEs) for material optimization has emerged as a significant advance in material inverse design. However, VAEs are inherently prone to generating blurred images, posing challenges for precise inverse design and microstructure manufacturing. While increasing the dimensionality of the VAE latent space can mitigate reconstruction blurriness to some extent, it simultaneously imposes a substantial burden on target optimization due to an excessively large search space. To address these limitations, this study adopts a variational autoencoder guided conditional diffusion generative model (VAE-CDGM) framework integrated with Bayesian optimization to achieve the inverse design of composite materials with targeted mechanical properties. The VAE-CDGM model synergizes the strengths of VAEs and denoising diffusion probabilistic models (DDPMs), enabling the generation of high-quality, sharp images while preserving a manipulable latent space. To accommodate varying dimensional requirements of the latent space, two optimization strategies are proposed. When the latent space dimensionality is excessively high, SHapley Additive exPlanations (SHAP) sensitivity analysis is employed to identify critical latent features for optimization within a reduced subspace. Conversely, direct optimization is performed in the low-dimensional latent space of VAE-CDGM when the dimensionality is modest. The results demonstrate that both strategies accurately achieve the targeted design of composite materials while circumventing the blurred-reconstruction flaws of VAEs, offering a novel pathway for the precise design of advanced materials.
Keywords: Variational autoencoder; denoising diffusion generation model; composite materials; Bayesian optimization; SHapley Additive exPlanations
Multi-source remote sensing and machine learning reveal spatiotemporal variations and drivers of NPP in the Tianshan Mountains,China
13
Authors: LI Jiani, XU Denghui, XU Zhonglin, WANG Yao, YANG Jianjun. Journal of Arid Land, 2026, Issue 1, pp. 56-83 (28 pages)
Arid mountain ecosystems are highly sensitive to hydrothermal stress and land use intensification, yet where net primary productivity (NPP) degradation is likely to persist, and what drives it, remain unclear in the Tianshan Mountains of Northwest China. We integrated multi-source remote sensing with the Carnegie-Ames-Stanford Approach (CASA) model to estimate NPP during 2000-2020, assessed trend persistence using the Hurst exponent, and identified key drivers and nonlinear thresholds with Extreme Gradient Boosting (XGBoost) and SHapley Additive exPlanations (SHAP). Total NPP averaged 55.74 Tg C/a and ranged from 48.07 to 65.91 Tg C/a from 2000 to 2020, while regional mean NPP rose from 138.97 to 160.69 g C/(m^2·a). Land use transfer analysis showed that grassland expanded mainly at the expense of unutilized land and that cropland increased overall. Although NPP increased across 64.11% of the region during 2000-2020, persistence analysis suggested that 53.93% of the Tianshan Mountains was prone to continued NPP decline, including 36.41% with significant projected decline and 17.52% with weak projected decline; these areas formed degradation hotspots concentrated in the central and northern Tianshan Mountains. In contrast, potential improvement was limited (strong persistent improvement: 4.97%; strong anti-persistent improvement: 0.36%). Driver attribution indicated that land use dominated NPP variability (mean absolute SHAP value = 29.54%), followed by precipitation (16.03%) and temperature (11.05%). SHAP dependence analyses showed that precipitation effects stabilized at 300.00-400.00 mm, and temperature exhibited an inverted U-shaped response with a peak near 0.00°C. These findings indicate that persistent degradation risk arises from hydrothermal constraints interacting with land use conversion, highlighting the need for threshold-informed, spatially targeted management to sustain carbon sequestration in arid mountain ecosystems.
Keywords: net primary productivity (NPP); Carnegie-Ames-Stanford Approach (CASA); Hurst exponent; land use change; Extreme Gradient Boosting (XGBoost); SHapley Additive exPlanations (SHAP); hydrothermal thresholds
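The Hurst-exponent persistence analysis used in this study can be sketched with a simple rescaled-range (R/S) estimate: the slope of log(R/S) against log(window size), with values near 0.5 indicating no memory and values toward 1 indicating persistence (pure NumPy; the window sizes and synthetic series below are illustrative choices, not from the paper):

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            dev = chunk - chunk.mean()
            z = np.cumsum(dev)            # cumulative deviation from the mean
            r = z.max() - z.min()         # range of cumulative deviation
            s = chunk.std()               # scale
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]  # slope = Hurst estimate

rng = np.random.default_rng(1)
noise = rng.normal(size=4096)
h_noise = hurst_rs(noise)             # memoryless series: near 0.5
h_trend = hurst_rs(np.cumsum(noise))  # integrated series: strongly persistent
```

A white-noise series scores near 0.5 while its running sum scores near 1, which is the contrast the persistence maps in the paper exploit.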
A machine learning application for predicting respiratory disease mortality from meteorological and air pollution factors: a case study of Haidian District, Beijing
14
Authors: Chen Jianming, Wang Qing, Xu Xin, Ma Zi'ang, Bo Xin, Li Yang. Journal of Beijing University of Chemical Technology (Natural Science Edition), 2025, Issue 6, pp. 1-9 (9 pages)
With advancing urbanization and industrialization, worsening air pollution and increasingly frequent extreme weather events pose a major threat to public health. This study evaluates the effects of meteorological factors and air pollution on respiratory disease mortality. Using meteorological data, air pollutant data, and respiratory disease mortality data for Haidian District, Beijing, China, from 1 January 2014 to 31 July 2024, a random forest (RF) model was used to analyze the influence of meteorological factors and air pollutants on respiratory disease mortality, and SHapley Additive exPlanations (SHAP) was applied to analyze the influencing factors. Spearman correlation analysis and the RF model showed that SO2, NO2, PM2.5 and PM10 concentrations were positively correlated with respiratory disease mortality, while minimum temperature was negatively correlated, and the model performed better in winter than in other seasons. Furthermore, the model's global SHAP feature results indicated that minimum temperature was the most important factor affecting respiratory disease mortality. The results demonstrate that the RF model has potential for predicting respiratory disease mortality, effectively combining meteorological and air pollution data, and that SHAP further improves the interpretability of machine learning models. This study provides strong support for policymakers in formulating targeted air quality control measures, extreme-temperature health warnings, and seasonal respiratory disease prevention strategies.
Keywords: air pollution; meteorological factors; random forest; SHapley Additive exPlanations (SHAP); respiratory disease mortality; correlation analysis
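The Spearman correlation analysis paired with the RF model above reduces to rank correlation. A minimal sketch for samples without ties (the pollutant and mortality numbers below are invented for illustration; tie handling via average ranks is omitted for brevity):

```python
def spearman(x, y):
    """Spearman rank correlation for tie-free samples:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical pollutant concentrations vs. daily deaths (illustrative only)
so2 = [4.1, 2.3, 8.7, 5.5, 3.0, 7.2]
deaths = [12, 9, 21, 15, 10, 18]
rho = spearman(so2, deaths)  # perfectly monotonic here -> 1.0
```

Because the measure depends only on ranks, it captures monotonic (not just linear) associations, which suits nonlinearly related exposure-mortality data.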
Runoff simulation in the Qinghai Lake Basin by coupling a hydrological model with deep learning
15
Authors: Li Na, Zhao Yong, Liang Sihai, Wang Xusheng, Wan Li. Journal of Water Resources Research, 2025, Issue 5, pp. 458-470 (13 pages)
Focusing on hydrological processes in the Qinghai Lake Basin of China, this study developed a hybrid model coupling the conceptual hydrological model FLEX (Flux Exchange) with a Gated Recurrent Unit (GRU) to simulate and predict daily runoff of the Buha River, the basin's largest tributary, based on multi-year meteorological and hydrological data. Three strategies were adopted to improve simulation accuracy: the differential evolution adaptive algorithm DREAM(zs) was introduced to invert hydrological parameters and optimize the FLEX model; variational mode decomposition (VMD) was used to extract information and features from the runoff data; and the sparrow search algorithm (SSA) was used to optimize the parameters of the deep learning GRU. The FLEX model's simulation results, together with meteorological data, were fed to the neural network as inputs, forming the FLEX-VMD-SSA-GRU hybrid model. The influence and contribution of different meteorological input conditions on the simulation results were also examined: based on seven main meteorological elements, fourteen input scenarios, from few inputs to many, were simulated. Finally, SHAP was used to analyze the results of the deep learning method, revealing the contribution and importance of meteorological variables to long-term runoff trends.
Keywords: runoff simulation; conceptual hydrological model FLEX; gated recurrent unit (GRU); FLEX-VMD-SSA-GRU hybrid model; Shapley Additive exPlanations (SHAP)
Transfer learning-based encoder-decoder model with visual explanations for infrastructure crack segmentation: New open database and comprehensive evaluation (Cited by 2)
16
Authors: Fangyu Liu, Wenqi Ding, Yafei Qiao, Linbing Wang. Underground Space (SCIE, EI, CSCD), 2024, Issue 4, pp. 60-81 (22 pages)
Contemporary demands necessitate the swift and accurate detection of cracks in critical infrastructures, including tunnels and pavements. This study proposed a transfer learning-based encoder-decoder method with visual explanations for infrastructure crack segmentation. First, a vast dataset containing 7089 images was developed, comprising diverse conditions: simple and complex crack patterns as well as clean and rough backgrounds. Second, leveraging transfer learning, an encoder-decoder model with visual explanations was formulated, utilizing various pre-trained convolutional neural networks (CNNs) as the encoder. Visual explanations were achieved through gradient-weighted class activation mapping (Grad-CAM) to interpret the CNN segmentation model. Third, accuracy, complexity (computational and model), and memory usage were assessed to judge CNN feasibility in practical engineering. Model performance was gauged via prediction and visual explanation. The investigation encompassed hyperparameters, data augmentation, deep learning from scratch vs. transfer learning, segmentation model architectures, segmentation model encoders, and encoder pre-training strategies. Results underscored transfer learning's potency in enhancing CNN accuracy for crack segmentation, surpassing deep learning from scratch. Notably, encoder classification accuracy bore no significant correlation with CNN segmentation accuracy. Among all tested models, UNet-EfficientNet_B7 excelled in crack segmentation, harmonizing accuracy, complexity, memory usage, prediction, and visual explanation.
Keywords: Crack segmentation; Transfer learning; Visual explanation; Infrastructure; Database
Enhanced Wheat Disease Detection Using Deep Learning and Explainable AI Techniques
17
Authors: Hussam Qushtom, Ahmad Hasasneh, Sari Masri. Computers, Materials & Continua, 2025, Issue 7, pp. 1379-1395 (17 pages)
This study presents an enhanced convolutional neural network (CNN) model integrated with explainable artificial intelligence (XAI) techniques for accurate prediction and interpretation of wheat crop diseases. The aim is to streamline the detection process while offering transparent insights into the model's decision-making to support effective disease management. To evaluate the model, a dataset was collected from wheat fields in Kotli, Azad Kashmir, Pakistan, and tested across multiple data splits. The proposed model demonstrates improved stability, faster convergence, and higher classification accuracy. The results show significant improvements in prediction accuracy and stability compared to prior works, achieving up to 100% accuracy in certain configurations. In addition, XAI methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) were employed to explain the model's predictions, highlighting the most influential features contributing to classification decisions. The combined use of CNN and XAI offers a dual benefit: strong predictive performance and clear interpretability of outcomes, which is especially critical in real-world agricultural applications. These findings underscore the potential of integrating deep learning models with XAI to advance automated plant disease detection. The study offers a precise, reliable, and interpretable solution for improving wheat production and promoting agricultural sustainability. Future extensions of this work may include scaling the dataset across broader regions and incorporating additional modalities such as environmental data to enhance model robustness and generalization.
Keywords: Convolutional neural network (CNN); wheat crop disease; deep learning; disease detection; Shapley Additive Explanations (SHAP); Local Interpretable Model-agnostic Explanations (LIME)
Analysis of factors influencing temporary plugging effectiveness in tight sandstone gas wells based on random forest and the SHAP algorithm
18
Authors: Huang Hao, Che Hengda, Kong Xiangwei, Xin Fubin, Xiang Jiuzhou, Ji Junjie. Science Technology and Engineering, 2025, Issue 26, pp. 11135-11143 (9 pages)
To investigate how geological factors, staging and perforation parameters, and fracturing operation factors affect inter-cluster temporary plugging effectiveness, a quantitative model and formula for plugging effectiveness were constructed, 76 sets of data from temporarily plugged wells in the Sulige block were collected, and a plugging-effectiveness algorithm model was built by combining random forest with the SHAP (Shapley Additive exPlanations) value algorithm. Validation of the quantitative model and formula and of the algorithm model showed that the quantified plugging effectiveness was positively correlated with gas production contribution (P = 0.037), confirming the high accuracy of the quantitative model and formula; and because MSE, MAE and R^2 differed only slightly between the training and test sets, the algorithm model showed strong generalization and high accuracy. Factor analysis based on the algorithm model indicated that five factors, total stage count, permeability, number of diverting balls, cluster spacing, and sand ratio, contributed most to plugging effectiveness. Single-factor analysis further showed that the rule of plugging effectiveness increasing with total stage count applies only to vertical wells, not horizontal wells; plugging effectiveness deteriorates as permeability increases; and positive gains in plugging effectiveness are achieved when the number of diverting balls is below 50, cluster spacing exceeds 20 m, and the sand ratio is between 18% and 20%. These results can serve as a reference for designing temporary plugging operations in the Sulige and similar gas fields.
Keywords: Sulige gas field; tight sandstone gas; temporary plugging effectiveness; random forest; SHAP (Shapley Additive exPlanations) values; model interpretation
Explainable machine learning for predicting mechanical properties of hot-rolled steel pipe (Cited by 2)
19
Authors: Jing-dong Li, You-zhao Sun, Xiao-chen Wang, Quan Yang, Guo-dong Liu, Hao-tang Qie, Feng-xia Li. Journal of Iron and Steel Research International, 2025, Issue 8, pp. 2475-2490 (16 pages)
Mechanical properties are critical to the quality of hot-rolled steel pipe products. Accurately understanding the relationship between rolling parameters and mechanical properties is crucial for effective prediction and control. To address this, an industrial big data platform was developed to collect and process multi-source heterogeneous data from the entire production process, providing a complete dataset for mechanical property prediction. The adaptive bandwidth kernel density estimation (ABKDE) method was proposed to adjust bandwidth dynamically based on data density. Combining long short-term memory neural networks with ABKDE offers robust prediction interval capabilities for mechanical properties. The proposed method was deployed in a large-scale steel plant and demonstrated superior prediction interval performance compared to lower upper bound estimation, mean variance estimation, and extreme learning machine-adaptive bandwidth kernel density estimation, achieving a prediction interval normalized average width of 0.37, a prediction interval coverage probability of 0.94, and the lowest coverage width-based criterion of 1.35. Notably, Shapley additive explanations-based explanations significantly improved the proposed model's credibility by providing a clear analysis of feature impacts.
Keywords: Mechanical property; Hot-rolled steel pipe; Machine learning; Adaptive bandwidth kernel density estimation; Shapley additive explanations-based explanation
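The interval-quality metrics quoted above (coverage probability and normalized average width) are simple to compute. A minimal sketch with invented values (the 0.37 and 0.94 figures in the abstract come from the paper's own data, not from this example):

```python
def interval_metrics(y, lower, upper):
    """PICP: fraction of targets falling inside their prediction interval.
    PINAW: mean interval width normalized by the target range."""
    n = len(y)
    covered = sum(1 for yi, lo, hi in zip(y, lower, upper) if lo <= yi <= hi)
    picp = covered / n
    mean_width = sum(hi - lo for lo, hi in zip(lower, upper)) / n
    pinaw = mean_width / (max(y) - min(y))
    return picp, pinaw

# Illustrative yield-strength values (MPa) with hypothetical bounds
y     = [520, 535, 510, 548, 530]
lower = [512, 528, 500, 536, 525]
upper = [530, 545, 518, 546, 540]
picp, pinaw = interval_metrics(y, lower, upper)  # 548 falls outside -> PICP 0.8
```

A good interval predictor keeps PICP near the nominal confidence level while keeping PINAW small, which is the trade-off the coverage width-based criterion combines into one score.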
Machine learning models for predicting cell voltage and effluent copper ion concentration in the electrowinning stage of copper electrorefining
20
Authors: Yan Zhezhen, Lu Jincheng, Cheng Han, Liao Jiaqi, Xu Fuyuan, Duan Ning. Nonferrous Metals (Extractive Metallurgy), 2025, Issue 9, pp. 13-24 (12 pages)
Electrowinning is currently the most common process for purifying copper electrolyte, but its effluent copper ion concentration fluctuates widely and is difficult to regulate manually, which can sharply increase the load on the downstream sulfidation unit and the amount of copper-arsenic co-precipitation waste; moreover, traditional prediction models suffer from lack of interpretability, steady-state limitations, and poor generalization. To address this, multi-parameter models were built to accurately predict cell voltage and effluent copper ion concentration in industrial electrowinning. A comparison of ten machine learning models showed that GBR performed best for voltage prediction (coefficient of determination R^2 = 0.79, mean squared error MSE = 1.25), while XGBoost achieved the highest accuracy for effluent copper ion concentration (R^2 = 0.87, MSE = 5.58). SHAP interpretability analysis indicated that current and time were the dominant factors governing voltage and effluent copper ion concentration, respectively. The models' decision mechanisms were consistent with electrochemical principles and the law of mass conservation, overcoming traditional models' limitations in representing nonlinear relationships and providing a basis for early warning of abnormal operating conditions, dynamic optimization of key parameters, and reduction of pollutant generation.
Keywords: copper electrowinning; machine learning; Gradient Boosting Regression (GBR); eXtreme Gradient Boosting (XGBoost); interpretability analysis; Shapley Additive exPlanations (SHAP)