Journal Articles
2 articles found
1. Investigating Black-Box Model for Wind Power Forecasting Using Local Interpretable Model-Agnostic Explanations Algorithm
Authors: Mao Yang, Chuanyu Xu, Yuying Bai, Miaomiao Ma, Xin Su
Journal: CSEE Journal of Power and Energy Systems, 2025, No. 1, pp. 227-242 (16 pages)
Abstract: Wind power forecasting (WPF) is important for safe, stable, and reliable integration of new energy technologies into power systems. Machine learning (ML) algorithms have recently attracted increasing attention in the field of WPF. However, opaque decisions and lack of trustworthiness of black-box models for WPF could cause scheduling risks. This study develops a method for identifying risky models in practical applications and avoiding the risks. First, a local interpretable model-agnostic explanations algorithm is introduced and improved for WPF model analysis. On that basis, a novel index is presented to quantify the level at which neural networks or other black-box models can trust features involved in training. Then, by revealing the operational mechanism for local samples, human interpretability of the black-box model is examined under different accuracies, time horizons, and seasons. This interpretability provides a basis for several technical routes for WPF from the viewpoint of the forecasting model. Moreover, further improvements in the accuracy of WPF are explored by evaluating the possibility of using interpretable ML models that apply multi-horizon global trust modeling and multi-season interpretable feature selection methods. Experimental results from a wind farm in China show that error can be robustly reduced.
Keywords: Black-box model; correlation analysis; feature trust index; local interpretability; local interpretable model-agnostic explanations (LIME); wind power forecasting
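As a rough illustration of the LIME-based analysis described in the abstract above, the sketch below applies the off-the-shelf lime package to a stand-in black-box regressor trained on synthetic data. The feature names, the RandomForestRegressor surrogate, the toy power curve, and the idea of aggregating local weights into a trust-style index are assumptions for demonstration only; they are not the paper's improved algorithm, wind farm data, or index definition.

```python
# Minimal, illustrative sketch of LIME applied to a stand-in black-box
# wind power forecasting model. Feature names, data, and the surrogate
# model are assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical NWP-style input features
feature_names = ["wind_speed", "wind_direction", "temperature", "pressure"]
X = rng.random((500, 4))
y = 1.5 * X[:, 0] ** 3 + 0.1 * rng.normal(size=500)  # toy power curve + noise

# Stand-in black-box forecasting model
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Local interpretable model-agnostic explanation for one forecast sample
explainer = LimeTabularExplainer(
    X, mode="regression", feature_names=feature_names, discretize_continuous=True
)
explanation = explainer.explain_instance(X[0], model.predict, num_features=4)

# Feature-level weights of the local linear surrogate around this sample;
# averaging such weights over many samples would be one plausible way to
# approximate a feature-level trust index of the kind the abstract describes.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```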
2. A Deep Learning Framework for Heart Disease Prediction with Explainable Artificial Intelligence
Authors: Muhammad Adil, Nadeem Javaid, Imran Ahmed, Abrar Ahmed, Nabil Alrajeh
Journal: Computers, Materials & Continua, 2026, No. 1, pp. 1944-1963 (20 pages)
Abstract: Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the personal key indicators of heart disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority-class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Furthermore, robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Moreover, model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
Keywords: heart disease; deep learning; localized random affine shadowsampling; local interpretable model-agnostic explanations; Shapley additive explanations; 10-fold cross-validation
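To make the "hierarchical residual transformations" idea from the abstract above concrete, the sketch below shows a minimal residual block for tabular inputs in PyTorch. The layer sizes, depth, activation choices, and the 18-feature dummy batch are assumptions for illustration; this is not the paper's DeepResNet architecture, and its preprocessing pipeline (label encoding, feature scaling, and localized random affine shadowsampling) is omitted.

```python
# Minimal sketch of residual transformations on tabular features.
# Dimensions and depth are arbitrary choices, not the paper's DeepResNet.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual transformation: output = ReLU(x + F(x))."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection lets gradients bypass the transformation
        return self.act(x + self.body(x))

class TabularResNet(nn.Module):
    """Stacked residual blocks followed by a binary classification head."""
    def __init__(self, in_features: int, hidden: int = 64, depth: int = 3):
        super().__init__()
        self.stem = nn.Linear(in_features, hidden)
        self.blocks = nn.Sequential(
            *[ResidualBlock(hidden, hidden * 2) for _ in range(depth)]
        )
        self.head = nn.Linear(hidden, 1)  # logit for disease / no disease

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.blocks(self.stem(x)))

# Example forward pass on a dummy batch of 18 encoded and scaled indicators
# (18 is an arbitrary number for illustration).
model = TabularResNet(in_features=18)
probs = torch.sigmoid(model(torch.randn(32, 18)))
print(probs.shape)  # torch.Size([32, 1])
```

In practice, SHAP and LIME explainers could then be run against the trained network's prediction function to attribute each indicator's contribution to a given prediction, in the spirit of the interpretability analysis the abstract describes.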