Journal Articles
91 articles found
1. Carbon Emission Prediction and Feature Analysis of Railway Ballastless Track Based on AutoGluon-XAI
Authors: 鲍学英, 韩通. 《安全与环境学报》 (PKU Core), 2025, No. 6, pp. 2431-2440 (10 pages)
To analyse the carbon emissions of railway ballastless track from a whole-life-cycle perspective, this study quantifies the differences in emissions across life-cycle stages and proposes a prediction and feature-analysis model that combines an automated machine learning framework with explainable artificial intelligence (XAI). First, the life cycle of ballastless track is divided into stages and a whole-life-cycle carbon emission accounting system is constructed. Second, the AutoGluon automated machine learning model is used to screen feature variables and rank their global importance, with XAI providing local explanations. Finally, a railway ballastless track project is studied as a case. The results show a whole-life-cycle carbon emission of 1.2448 million t, of which the construction (materialization), operation-and-maintenance, and demolition stages account for 42.84%, 57.11%, and 0.05%, respectively, while labour, materials, and machinery contribute 0.7%, 95.8%, and 3.5% of the emissions. The AutoGluon-XAI results show that AutoGluon predicts more accurately than traditional machine learning models such as random forests and achieves the best overall performance. In the global explanation, the four most important feature variables are renewal cycle, section type, track structure type, and foundation condition, all key drivers of whole-life-cycle emissions. In the local explanation, different categorical feature values contribute differently: features such as embankment sections and slab ballastless track show a marked positive (promoting) effect on emissions, whereas tunnel sections and rock foundations show a marked negative (suppressing) effect.
Keywords: environmental engineering; ballastless track; carbon emission prediction; AutoGluon-XAI model; feature analysis
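The workflow this abstract describes — automated model selection on tabular life-cycle features followed by a global importance ranking — maps onto AutoGluon's tabular API roughly as follows. This is a minimal sketch, not the authors' code; the data file, the co2_emission target column, and the split are illustrative assumptions.

```python
import pandas as pd
from autogluon.tabular import TabularPredictor

df = pd.read_csv("ballastless_track_lca.csv")      # hypothetical life-cycle feature table
train = df.sample(frac=0.8, random_state=0)
test = df.drop(train.index)

# Automated model selection/ensembling, then permutation-based global importance
predictor = TabularPredictor(label="co2_emission", eval_metric="root_mean_squared_error")
predictor.fit(train_data=train, presets="medium_quality")

print(predictor.leaderboard(test))                 # compares the fitted model zoo on the test split
print(predictor.feature_importance(test))          # global ranking of the feature variables
```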
2. The Explainability Obligation of Judicial Artificial Intelligence in the Context of XAI: A Perspective from the Theory of Judicial Sincerity
Authors: 冯玉军, 沈鸿艺. 《北京航空航天大学学报(社会科学版)》, 2025, No. 4, pp. 29-41 (13 pages)
Facing the "black box" problem of artificial intelligence, explainable AI (XAI) is regarded as an effective tool for enhancing explainability in judicial adjudication. However, the meaning of explainability in XAI actually departs from what judicial adjudication requires. Approaching explainability from the theoretical perspective of judicial sincerity, and taking whether the judge's subjective state is considered as the criterion, judicial sincerity can be divided into subjective and objective sincerity, so as to realise the judicial value of guiding action with justified reasons. The study finds that the post-hoc nature of XAI explanations fails to satisfy the requirement of subjective sincerity and therefore departs from judicial values, whereas interpretable AI (IAI) can satisfy subjective judicial sincerity. Accordingly, based on the provisions on the duty to explain automated decision-making in the Personal Information Protection Law of the People's Republic of China, the explanation duties of deployers and providers of judicial AI and of judges in individual cases should be specified, ensuring that IAI rather than XAI is used in the key stages of adjudication, thereby realising the core requirement of judicial sincerity.
Keywords: judicial artificial intelligence; explainability; judicial sincerity; explainable AI (XAI); interpretable AI (IAI)
3. An Attention-Based CNN Framework for Alzheimer's Disease Staging with Multi-Technique XAI Visualization
Authors: Mustafa Lateef Fadhil Jumaili, Emrullah Sonuç. Computers, Materials & Continua, 2025, No. 5, pp. 2947-2969 (23 pages)
Alzheimer's disease (AD) is a significant challenge in modern healthcare, with early detection and accurate staging remaining critical priorities for effective intervention. While Deep Learning (DL) approaches have shown promise in AD diagnosis, existing methods often struggle with the issues of precision, interpretability, and class imbalance. This study presents a novel framework that integrates DL with several eXplainable Artificial Intelligence (XAI) techniques, in particular attention mechanisms, Gradient-Weighted Class Activation Mapping (Grad-CAM), and Local Interpretable Model-Agnostic Explanations (LIME), to improve both model interpretability and feature selection. The study evaluates four different DL architectures (ResMLP, VGG16, Xception, and a Convolutional Neural Network (CNN) with attention mechanism) on a balanced dataset of 3714 MRI brain scans from patients aged 70 and older. The proposed CNN with attention model achieved superior performance, demonstrating 99.18% accuracy on the primary dataset and 96.64% accuracy on the ADNI dataset, significantly advancing the state-of-the-art in AD classification. The ability of the framework to provide comprehensive, interpretable results through multiple visualization techniques while maintaining high classification accuracy represents a significant advancement in the computational diagnosis of AD, potentially enabling more accurate and earlier intervention in clinical settings.
Keywords: Alzheimer's disease; deep learning; early disease detection; XAI; medical image classification
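Grad-CAM, one of the visualization techniques named above, weights a convolutional layer's feature maps by the gradient of the chosen class score and upsamples the result into a heatmap over the input scan. A minimal PyTorch sketch follows; it assumes a stock VGG16 backbone for illustration rather than the paper's attention-based CNN.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()    # stand-in backbone, not the paper's model
target_layer = model.features[-1]                        # output of the last conv block
acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def grad_cam(x, class_idx=None):
    """x: (1, 3, H, W) normalized image tensor -> (H, W) heatmap in [0, 1]."""
    scores = model(x)
    idx = scores.argmax(dim=1).item() if class_idx is None else class_idx
    model.zero_grad()
    scores[0, idx].backward()                            # gradient of the chosen class score
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
    cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze().detach()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))          # overlay on the scan for display
```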
4. From Tesla to xAI: Musk Pieces Together a Blueprint for a "Multi-Planetary Future"
Author: 于远航. 《太空探索》, 2025, No. 9, pp. 28-33 (6 pages)
In July this year, the global technology community received major news: Musk's SpaceX invested US$2 billion in the artificial intelligence company xAI. This move is not only SpaceX's first large-scale foray into an "external" AI business, but is also seen as an important step in building the ecosystem of Musk's business empire. By vertically integrating Tesla, SpaceX, xAI, and his other companies, Musk is attempting to break down industry barriers and fuse artificial intelligence, aerospace technology, and energy innovation into one whole, achieving coordinated development across multiple fields.
Keywords: Musk; xAI; Tesla; SpaceX; artificial intelligence
5. An IoT-Enabled Hybrid DRL-XAI Framework for Transparent Urban Water Management
Authors: Qamar H. Naith, H. Mancy. Computer Modeling in Engineering & Sciences, 2025, No. 7, pp. 387-405 (19 pages)
Effective water distribution and transparency are at risk of being undermined unless urban infrastructure is well managed. With improved control systems in place to monitor leakage, pressure variability, and energy use, issues that previously went unnoticed are now being recognized. This paper presents a hybrid framework that combines Multi-Agent Deep Reinforcement Learning (MADRL) with Shapley Additive Explanations (SHAP)-based Explainable AI (XAI) for adaptive and interpretable water resource management. In the methodology, the agents learn decentralized control policies for pumps and valves from real-time network states, while SHAP provides human-understandable explanations of the agents' decisions. The framework has been validated on five diverse datasets, three of which are real-world scenarios involving actual water consumption from NYC and Alicante, with the other two being simulation-based standards such as LeakDB and the Water Distribution System Anomaly (WDSA) network. Empirical results demonstrate that the MADRL-SHAP hybrid system reduces water loss by up to 32%, improves energy efficiency by up to 25%, and maintains pressure stability between 91% and 93%, thereby outperforming traditional rule-based control, single-agent DRL (Deep Reinforcement Learning), and XGBoost-SHAP baselines. Furthermore, SHAP-based interpretation brings transparency to the proposed model, with the average explanation consistency across all prediction models reaching 88%, reinforcing the trustworthiness of the decision-making and empowering utility operators to derive actionable insights from the model. The proposed framework addresses critical challenges of smart water distribution.
Keywords: multi-agent reinforcement learning; explainable artificial intelligence (XAI); SHAP (Shapley Additive Explanations); smart water distribution; urban infrastructure; Internet of Things (IoT); water resource optimization; energy-efficient control
6. The Convergence of 6G and XAI (Cited by 1)
Authors: 王思野, 高荦雨, 赵中原, 赵雪莹, 徐文波. 《移动通信》, 2024, No. 8, pp. 111-117 (7 pages)
In the development of future 6G networks, AI is regarded as a key driving force, yet its "black box" nature clearly conflicts with the strong explainability of traditional communication systems. By analysing the applications of AI in communications and the importance of explainability in AI systems, this paper examines the necessity and feasibility of introducing XAI techniques that provide transparency and interpretability. The discussion covers data collection in the intelligent sensing layer, feature engineering in the data mining layer, resource allocation and network management decisions in the intelligent control layer, and user interaction and explanation of AI decisions in the intelligent application layer. XAI is expected to significantly improve the transparency and explainability of intelligent applications in 6G networks, but its realisation and deployment still face challenges of diversity and complexity. Future research should further explore the potential of XAI, particularly deeper application methods in the communications domain, so that deployed AI and communication systems form a "strong coupling" and jointly advance the comprehensive intelligence of communication systems.
Keywords: 6G; AI; XAI
7. Reverse Analysis Method and Process for Improving Malware Detection Based on XAI Model (Cited by 1)
Authors: Ki-Pyoung Ma, Dong-Ju Ryu, Sang-Joon Lee. Computers, Materials & Continua (SCIE, EI), 2024, No. 12, pp. 4485-4502 (18 pages)
With the advancements in artificial intelligence (AI) technology, attackers are increasingly using sophisticated techniques, including ChatGPT. Endpoint Detection & Response (EDR) is a system that detects and responds to unusual activities or security threats occurring on computers or endpoint devices within an organization. Unlike traditional antivirus software, EDR is more about responding to a threat after it has occurred than blocking it. This study aims to overcome challenges in security control, such as increased log size, emerging security threats, and the technical demands faced by control staff. Previous studies have focused on AI detection models, emphasizing detection rates and model performance; however, the underlying reasons behind the detection results were often insufficiently understood, leading to varying outcomes depending on the learning model. Additionally, the presence of both structured and unstructured logs, the growth in new security threats, and increasing technical disparities among control staff pose further challenges for effective security control. This study proposes to improve the existing EDR system and overcome the limitations of security control. Data were analyzed during the preprocessing stage to identify potential threat factors that influence the detection process and its outcomes, and Explainable AI (XAI) techniques are employed to assess the impact of preprocessing on the learning outcomes. To ensure objectivity and versatility, five widely recognized datasets were used, and eleven commonly used machine learning (ML) models for malware detection were tested, with the five best-performing models selected for further analysis. The results indicate that the eXtreme Gradient Boosting (XGBoost) model outperformed the others. Moreover, the study conducts an in-depth analysis of the preprocessing phase, tracing backward from the detection result to infer potential threats and classify the primary variables influencing the model's prediction. This analysis applies SHapley Additive exPlanations (SHAP), an XAI result, which provides insight into the influence of specific features on detection outcomes and suggests potential breaches by identifying common parameters in malware through file backtracking and feature weights. The study also proposes a counter-detection analysis process to overcome the limitations of existing deep learning outcomes, understand the decision-making process of the AI, and enhance reliability. These contributions are expected to significantly enhance EDR systems and address existing limitations in security control.
Keywords: Endpoint Detection & Response (EDR); explainable AI (XAI); SHapley Additive exPlanations (SHAP); reverse XAI; machine learning (ML)
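The SHAP-based backtracking described above — train a tree model on preprocessed log features, then ask which features pushed a given sample toward a malware verdict — can be sketched as follows. The edr_features.csv file, its label column, and the hyperparameters are assumptions for illustration, not the paper's pipeline.

```python
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("edr_features.csv")               # hypothetical preprocessed EDR log features
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)          # per-sample, per-feature attributions
shap.summary_plot(shap_values, X_te)               # global view of what drives detections
# For one flagged sample, the largest |SHAP| values name the log-derived features
# (file, process, registry attributes, etc.) to backtrack first.
```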
8. What-If XAI Framework (WiXAI): From Counterfactuals towards Causal Understanding
Authors: Neelabh Kshetry, Mehmed Kantardzic. Journal of Computer and Communications, 2024, No. 6, pp. 169-198 (30 pages)
People learn causal relations from childhood using counterfactual reasoning. Counterfactual reasoning uses counterfactual examples, which take the form of "what if this had happened differently". Counterfactual examples are also the basis of counterfactual explanation in explainable artificial intelligence (XAI). However, a framework that relies solely on optimization algorithms to find and present counterfactual samples cannot help users gain a deeper understanding of the system; without a way to verify their understanding, users can even be misled by such explanations. These limitations can be overcome through an interactive and iterative framework that allows users to explore their desired "what-if" scenarios. The purpose of our research is to develop such a framework. In this paper, we present our "what-if" XAI framework (WiXAI), which visualizes the artificial intelligence (AI) classification model from the perspective of the user's sample and guides their "what-if" exploration. We also formulate how to use the WiXAI framework to generate counterfactuals and understand the feature-feature and feature-output relations in depth for a local sample. These relations help move users toward causal understanding.
Keywords: XAI; AI; WiXAI; causal understanding; counterfactuals; counterfactual explanation
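In its simplest form, a "what-if" exploration of a local sample reduces to sweeping one feature over candidate values and checking where the classifier's decision flips. The sketch below illustrates that idea for any scikit-learn-style model; it is a simplified stand-in for, not an implementation of, the interactive WiXAI framework.

```python
import numpy as np

def what_if(model, x, feature_idx, grid):
    """Sweep one feature of a single sample x (1-D array) over `grid` and record
    where the model's predicted class flips relative to the original prediction."""
    base = model.predict(x.reshape(1, -1))[0]
    rows = []
    for value in grid:
        x_cf = x.copy()
        x_cf[feature_idx] = value
        pred = model.predict(x_cf.reshape(1, -1))[0]
        rows.append((value, pred, pred != base))   # True marks a counterfactual flip
    return rows

# Hypothetical usage with a fitted classifier `clf` and a local sample `x0`:
# flips = [r for r in what_if(clf, x0, feature_idx=2, grid=np.linspace(0.0, 1.0, 21)) if r[2]]
```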
9. Discussion on the Potential of Helium Extraction from LNG Tail Gas in the Sichuan Basin (Cited by 1)
Authors: 贺广庆, 宋煜, 巨龙, 王潇, 杜宣叶, 何方雨. 《油气田地面工程》, 2024, No. 2, pp. 22-25, 32 (5 pages)
Affected by the Russia-Ukraine war, helium prices have continued to rise, and more than 90% of China's helium supply depends on imports. Tight gas and shale gas in the Sichuan Basin contain trace amounts of associated helium, and the helium content of the boil-off gas (BOG) in LNG plant tail gas is relatively high. Crude helium in BOG can be refined into high-purity helium products through processes such as cryogenic separation, membrane separation, and pressure swing adsorption. Based on an integration of the recoverable helium resources in LNG plant tail gas across the Sichuan Basin and a techno-economic assessment of a dehydrogenation + pressure swing adsorption + cryogenic purification process, the process is judged to be economically feasible, is consistent with national policies supporting the clean utilization of natural gas, and can provide helium resources and technical support for major national strategic projects.
Keywords: helium; LNG tail gas; pressure swing adsorption; market potential; economic evaluation
10. Exploration and Practice of XAI Architecture (Cited by 2)
Authors: 夏正勋, 唐剑飞, 杨一帆, 罗圣美, 张燕, 谭锋镭, 谭圣儒. 《大数据》, 2024, No. 1, pp. 86-109 (24 pages)
Explainable AI (XAI) is an important component of trustworthy AI. Industry has studied individual XAI techniques in depth, but systematic research on engineering implementation is still lacking. This paper proposes a general XAI technical architecture built around four aspects: atomic explanation generation, core capability enhancement, business component embedding, and trustworthy explanation application. Four layers are designed accordingly: an XAI basic capability layer, an XAI core capability layer, an XAI business component layer, and an XAI application layer. Through the division of labour and collaboration among these layers, the engineering implementation of XAI is safeguarded across the whole workflow. Based on this architecture, new technical modules can be introduced flexibly to support the industrial application of XAI, providing a reference for promoting XAI across industries.
Keywords: explainable AI; trustworthy AI; XAI architecture
11. Explainable Artificial Intelligence (XAI) Model for Cancer Image Classification
Authors: Amit Singhal, Krishna Kant Agrawal, Angeles Quezada, Adrian Rodriguez Aguiñaga, Samantha Jiménez, Satya Prakash Yadav. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 10, pp. 401-441 (41 pages)
The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments. The aim is to ensure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions these algorithms make. Such models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The paper also discusses the potential applications of the proposed XAI models in the smart healthcare environment, helping to ensure trust and accountability in AI-based decisions, which is essential for achieving a safe and reliable smart healthcare environment.
Keywords: explainable artificial intelligence; artificial intelligence; XAI; healthcare; cancer; image classification
12. Assessment of Direct Economic Losses from Tropical Cyclones Based on Explainable Artificial Intelligence (XAI)
Authors: 刘淑贤, 刘扬, 杨琨, 张立生, 张源达. 《热带气象学报》 (CSCD, PKU Core), 2024, No. 6, pp. 943-953 (11 pages)
Explainable artificial intelligence (eXplainable Artificial Intelligence, XAI) has become an important direction in AI research; it helps explain how models make predictions and decisions and has considerable application value in meteorological disaster assessment. This study uses machine learning algorithms to assess the direct economic losses caused by tropical cyclones (TCs) and applies the XAI method SHAP (SHapley Additive exPlanations) to analyse, at both global and local levels, how feature factors influence and contribute to the model's predictions. The results show that the random forest (RF) model outperforms the LightGBM (Light Gradient Boosting Machine) model on all three evaluation metrics, with a root-mean-square error of 23.6, a mean absolute error of 11.1, and a coefficient of determination of 0.9. According to the SHAP values, the three most important factors in the RF model are maximum wind speed, maximum daily rainfall, and the proportion of stations reporting rainstorms. Specifically, samples with a maximum wind speed above 45 m·s⁻¹, a maximum daily rainfall above 250 mm, or a rainstorm-station proportion above 30% tend to make large positive contributions to the predicted direct economic loss of a TC. The study provides scientific evidence and theoretical support for decision-makers formulating disaster risk management strategies.
Keywords: tropical cyclone; direct economic loss; machine learning; explainable artificial intelligence; SHAP
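The comparison reported above — Random Forest versus LightGBM on RMSE, MAE, and R², with SHAP used afterwards for attribution — can be reproduced in outline as follows. The data file, the direct_loss target column, and the default hyperparameters are assumptions; the abstract does not specify them.

```python
import numpy as np
import pandas as pd
import shap
from lightgbm import LGBMRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("tc_losses.csv")                  # hypothetical TC disaster records
X, y = df.drop(columns=["direct_loss"]), df["direct_loss"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("RF", RandomForestRegressor(random_state=0)),
                    ("LightGBM", LGBMRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "RMSE", round(np.sqrt(mean_squared_error(y_te, pred)), 1),
          "MAE", round(mean_absolute_error(y_te, pred), 1),
          "R2", round(r2_score(y_te, pred), 2))

rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
shap.summary_plot(shap.TreeExplainer(rf).shap_values(X_te), X_te)   # global and per-sample drivers
```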
13. A Method for Application Identification and Classification Based on XAI (Cited by 1)
Authors: 陈晖, 唐勇. 《制造业自动化》 (PKU Core), 2012, No. 18, pp. 23-25 (3 pages)
With the development of networks, customers have a growing need for application-layer identification and management, and traditional DPI technology can no longer meet these needs. Against this background, this paper proposes XAI, a next-generation application identification and analysis technology, and, based on XAI, designs and implements fine-grained identification of file-sharing applications and signature-library storage based on multi-dimensional character-sequence graphs.
Keywords: application identification; XAI technology; classification; signature library
14. Optimal Design of a High-Overload Permanent Magnet Synchronous Motor Based on SHAP XAI (Cited by 2)
Authors: 李程, 韩雪岩, 朱龙飞. 《电工技术》, 2022, No. 11, pp. 194-200 (7 pages)
A high-overload frameless servo motor scheme for robots is proposed. Starting from the voltage and torque equations of the surface-mounted permanent magnet synchronous motor, the factors affecting its overload capability are examined. The SHAPLEY model-interpretability framework is then used to analyse the design factors that mainly influence overload capability, including magnet thickness, air-gap length, pole-arc coefficient, and stator split ratio, yielding how each parameter's contribution to overload capability varies with the design scheme and with its own value. Finally, the model is used to optimise the Kollmorgen TMB(S)-6051-A frameless permanent magnet synchronous motor, raising its overload capability to 4.6 times rated torque.
Keywords: high overload; surface-mounted permanent magnet synchronous motor; frameless motor; explainable artificial intelligence; SHAPLEY model interpretability framework
15. Explaining deep neural network models for electricity price forecasting with XAI
Authors: Antoine Pesenti, Aidan O'Sullivan. Energy and AI, 2025, No. 3, pp. 202-213 (12 pages)
Electricity markets are highly complex, involving many interactions and dependencies that make it hard to understand the inner workings of the market and what is driving prices. Econometric, white-box methods have been developed for this; however, they are not as powerful as deep neural network (DNN) models. In this paper, we use a DNN to forecast the price and then use XAI methods to understand the factors driving price dynamics in the market. The objective is to increase our understanding of how different electricity markets work. To do that, we apply explainable methods such as SHAP and Gradient, combined with visual techniques like heatmaps (saliency maps), to analyse the behaviour and contributions of various features across five electricity markets. We introduce the novel concepts of SSHAP values and SSHAP lines to enhance the representation of complex, high-dimensional tabular models.
Keywords: electricity price forecasting; EPF; explainable methods; XAI; explainable AI; SHAP; gradient; saliency map
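The Gradient method mentioned above amounts to differentiating the forecast with respect to the inputs and reading the gradient magnitudes as feature sensitivities, which can then be arranged into saliency heatmaps. A minimal PyTorch sketch with an illustrative network (not the paper's architecture or feature set):

```python
import torch
import torch.nn as nn

# Illustrative forecaster: 48 lagged/exogenous features -> 24 hourly day-ahead prices
model = nn.Sequential(nn.Linear(48, 128), nn.ReLU(), nn.Linear(128, 24))
x = torch.randn(1, 48, requires_grad=True)          # one day's inputs (loads, fuel prices, ...)

prices = model(x)                                   # forecast for the 24 delivery hours
prices[0, 12].backward()                            # sensitivity of the hour-12 price
saliency = x.grad.abs().squeeze()                   # |d price_12 / d feature_i|
print(saliency.topk(5).indices)                     # indices of the most influential inputs
# Repeating this over hours and days and stacking the rows gives the saliency heatmaps.
```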
16. Explainable Artificial Intelligence (XAI) for Material Design and Engineering Applications: A Quantitative Computational Framework
Authors: Bokai Liu, Pengju Liu, Weizhuo Lu, Thomas Olofsson. International Journal of Mechanical System Dynamics, 2025, No. 2, pp. 236-265 (30 pages)
The advancement of artificial intelligence (AI) in material design and engineering has led to significant improvements in the predictive modeling of material properties. However, the lack of interpretability in machine learning (ML)-based material informatics presents a major barrier to its practical adoption. This study proposes a novel quantitative computational framework that integrates ML models with explainable artificial intelligence (XAI) techniques to enhance both predictive accuracy and interpretability in material property prediction. The framework incorporates a structured pipeline, including data processing, feature selection, model training, performance evaluation, explainability analysis, and real-world deployment. It is validated through a representative case study on the prediction of high-performance concrete (HPC) compressive strength, using a comparative analysis of ML models such as Random Forest, XGBoost, Support Vector Regression (SVR), and Deep Neural Networks (DNNs). The results demonstrate that XGBoost achieves the highest predictive performance (R² = 0.918), while SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) provide detailed insights into feature importance and material interactions. Additionally, the deployment of the trained model as a cloud-based Flask-Gunicorn API enables real-time inference, ensuring scalability and accessibility for industrial and research applications. The proposed framework addresses key limitations of existing ML approaches by integrating advanced explainability techniques, systematically handling nonlinear feature interactions, and providing a scalable deployment strategy. This study contributes to the development of interpretable and deployable AI-driven material informatics, bridging the gap between data-driven predictions and fundamental material science principles.
Keywords: explainable artificial intelligence (XAI); high-performance concrete; material informatics; predictive modeling; cloud deployment
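The Flask-Gunicorn deployment step described above can be sketched as a small inference service wrapping a previously trained model. The model file name and the JSON feature fields below are assumptions for illustration, not the authors' service.

```python
import joblib
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("hpc_strength_model.joblib")    # hypothetical trained regressor

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON such as {"cement": 400, "water": 160, "superplasticizer": 8, "age": 28, ...}
    features = pd.DataFrame([request.get_json()])
    strength = float(model.predict(features)[0])
    return jsonify({"compressive_strength_mpa": strength})

# Production serving, e.g.:  gunicorn -w 2 -b 0.0.0.0:8000 app:app
```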
17. Toward Human-centered XAI in Practice: A Survey (Cited by 1)
Authors: Xiangwei Kong, Shujie Liu, Luhao Zhu. Machine Intelligence Research (EI, CSCD), 2024, No. 4, pp. 740-770 (31 pages)
Human adoption of artificial intelligence (AI) techniques is largely hampered by the increasing complexity and opacity of AI development. Explainable AI (XAI) techniques, with various methods and tools, have been developed to bridge this gap between high-performance black-box AI models and human understanding. However, the current adoption of XAI techniques still lacks "human-centered" guidance for designing solutions that meet different stakeholders' needs in XAI practice. We first summarize a human-centered demand framework that categorizes stakeholders into five key roles with specific demands by reviewing existing research, and then extract six commonly used human-centered XAI evaluation measures that help validate the effect of XAI. In addition, a taxonomy of XAI methods is developed for visual computing, with an analysis of method properties. Holding clearer human demands and XAI methods in mind, we take a medical image diagnosis scenario as an example to present an overview of how extant XAI approaches for visual computing fulfil stakeholders' human-centered demands in practice, and we check the availability of open-source XAI tools for stakeholders' use. This survey provides further guidance for matching diverse human demands with appropriate XAI methods or tools in specific applications, with a summary of the main challenges and future work toward human-centered XAI in practice.
Keywords: artificial intelligence (AI) application; explainable AI (XAI); human-centered design; visual computing; medical diagnosis
18. Evaluating the Rationality of the Management Zoning of the Qianjiangyuan Park Area Based on an Optimized Black Muntjac Distribution Model (Cited by 1)
Authors: 李知晓, 吴承照. 《生态学报》 (PKU Core), 2025, No. 16, pp. 7753-7768 (16 pages)
The black muntjac (Muntiacus crinifrons) is a Class I nationally protected wild animal in China and the flagship species of the Qianjiangyuan park area of the candidate Qianjiangyuan-Baishanzu National Park; scientifically based management zoning is essential for protecting its population and habitat. To evaluate habitat suitability for the black muntjac and support population conservation and habitat restoration, a stacked generalization-based ensemble algorithm (Stacking) was introduced to optimise the species distribution model (SDM). Explainable artificial intelligence (XAI) methods, including feature-importance analysis, SHapley Additive exPlanations (SHAP), and partial dependence plots (PDPs), were used to reveal the main factors affecting habitat suitability and their mechanisms, improving the transparency of the SDM. The predictions were then overlaid with the park's management zoning, roads, and village and settlement distribution to identify potential spatial conflicts between humans and the black muntjac and to assess the rationality of the zoning for its protection. The results show that suitable habitat is concentrated in the south and west of the park, covering about 39.49 km², or 15.67% of the total area, of which 81.29% lies within the core protection zone, indicating that the zoning is basically reasonable. The net loss of suitable habitat within the core protection zone is about 1.77 km², accounting for 57.1% of the total net loss of suitable habitat. Human-muntjac conflicts are concentrated in the northern core protection zone, indicating that in-situ protection, vegetation restoration, or relocation of residents alone cannot meet conservation needs and that management measures urgently need optimisation. Precipitation, temperature, elevation, and human disturbance are the main factors affecting habitat suitability; suitability is higher in areas with relatively high precipitation, higher elevation, cool temperatures, and low human disturbance. Measures such as strengthening species monitoring and habitat-suitability assessment, implementing an adaptive zoning framework, establishing buffer zones, building ecological corridors, and developing climate change early-warning plans are recommended to strengthen the overall protection of black muntjac habitat and promote the refinement of management in the park. The study provides a scientific basis for black muntjac conservation and spatial governance in the Qianjiangyuan park area.
Keywords: national park planning and management; species distribution model (SDM); machine learning (ML); stacked generalization ensemble algorithm (Stacking); explainable artificial intelligence (XAI)
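A stacked-generalization SDM of the kind described above can be assembled with scikit-learn's StackingClassifier: base learners fitted on environmental covariates and a meta-learner combining their out-of-fold predictions. The sketch below uses the covariates named in the abstract as illustrative column names; the data file, learners, and settings are assumptions, not the authors' pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

df = pd.read_csv("muntjac_occurrences.csv")        # hypothetical presence/absence records
X = df[["precipitation", "temperature", "elevation", "human_disturbance"]]
y = df["presence"]

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("xgb", XGBClassifier(eval_metric="logloss"))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,                                          # out-of-fold base predictions feed the meta-learner
)
print(cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean())   # habitat-suitability skill
```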
19. Evaluating the Effectiveness of Explainable Artificial Intelligence Techniques for Interpreting Breast Cancer Diagnosis across Multiple Imaging Modalities
Authors: 李禄, 罗浩军. 《临床医学进展》, 2025, No. 2, pp. 1503-1512 (10 pages)
Breast cancer remains one of the leading causes of cancer incidence and mortality among women worldwide. Early and accurate diagnosis plays a pivotal role in optimizing patient prognosis. Imaging techniques such as mammography, ultrasound, and magnetic resonance imaging (MRI) play crucial roles in the diagnosis of breast cancer. However, these techniques face multiple challenges, including accuracy fluctuations, significant operator dependency, and difficulties in result interpretation. In this context, the integration of Artificial Intelligence (AI), especially Explainable Artificial Intelligence (XAI), has become a revolutionary approach to improving diagnostic accuracy and enhancing trust. This review focuses on the comparative application of XAI technologies across different imaging modalities in breast cancer diagnosis. It delves into core XAI methods such as Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM), with an emphasis on their effectiveness in enhancing model interpretability and improving clinical utility. The review analyzes not only the advantages and limitations of XAI in mammography, ultrasound, and MRI applications but also highlights its contribution to increasing the transparency of AI-assisted predictions. Additionally, the review evaluates the performance of XAI in addressing issues related to false positives, false negatives, and the challenges of multimodal imaging data integration. The core value of this review lies in its comprehensive analysis of the potential of XAI in bridging the gap between advancements in AI technology and clinical application. By enhancing transparency, XAI can boost clinicians' trust in AI, facilitating its smoother integration into diagnostic workflows, thereby promoting personalized medical practices and improving patient treatment outcomes. In conclusion, despite significant progress made by XAI in improving AI model interpretability and accuracy, challenges remain in terms of computational complexity, general applicability, and clinical acceptance. Future research should focus on optimizing XAI methods, fostering interdisciplinary collaboration, and developing standardized frameworks to ensure the scalability and reliability of XAI technologies in diverse clinical environments.
Keywords: breast cancer diagnosis; explainable artificial intelligence (XAI); SHAP; LIME; Grad-CAM; imaging techniques; personalized medicine; artificial intelligence
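Of the methods reviewed above, LIME is the most modality-agnostic: it perturbs superpixels of a single image and fits a local surrogate model to see which regions support the predicted class. A minimal sketch with the lime package follows; `cnn_model` and `image` are assumed to be a pre-trained classifier and a preprocessed scan, respectively, and are not defined here.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    """images: (N, H, W, 3) array -> (N, n_classes) class probabilities."""
    return cnn_model.predict(np.asarray(images))    # `cnn_model` is an assumed pre-trained classifier

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn, top_labels=2,
                                         hide_color=0, num_samples=1000)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
overlay = mark_boundaries(img, mask)                # regions that most support the prediction
```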
20. Intrumer: A Multi-Module Distributed Explainable IDS/IPS for Securing Cloud Environment
Authors: Nazreen Banu A, S.K.B. Sangeetha. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 579-607 (29 pages)
The increasing use of cloud-based devices has brought cybersecurity and unwanted network traffic to a critical point, and cloud environments pose significant challenges in maintaining privacy and security. Approaches such as IDS have been developed to tackle these issues; however, most conventional Intrusion Detection System (IDS) models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces INTRUMER, a novel distributed, explainable, heterogeneous transformer-based intrusion detection system that offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together. Traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes feature selection and a Naïve Bayes algorithm classifies the features. The selected features are then forwarded to the Heterogeneous Attention Transformer (HAT) module, which takes the contextual interactions of the network traffic into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. Using the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to strengthen preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. These results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
Keywords: cloud computing; intrusion detection system; transformers; explainable artificial intelligence (XAI)