Abstract: Alzheimer’s disease (AD) is a significant challenge in modern healthcare, with early detection and accurate staging remaining critical priorities for effective intervention. While Deep Learning (DL) approaches have shown promise in AD diagnosis, existing methods often struggle with precision, interpretability, and class imbalance. This study presents a novel framework that integrates DL with several eXplainable Artificial Intelligence (XAI) techniques, in particular attention mechanisms, Gradient-Weighted Class Activation Mapping (Grad-CAM), and Local Interpretable Model-Agnostic Explanations (LIME), to improve both model interpretability and feature selection. The study evaluates four DL architectures (ResMLP, VGG16, Xception, and a Convolutional Neural Network (CNN) with an attention mechanism) on a balanced dataset of 3714 MRI brain scans from patients aged 70 and older. The proposed CNN with attention achieved superior performance, reaching 99.18% accuracy on the primary dataset and 96.64% accuracy on the ADNI dataset, advancing the state of the art in AD classification. The framework’s ability to provide comprehensive, interpretable results through multiple visualization techniques while maintaining high classification accuracy represents a significant advance in the computational diagnosis of AD, potentially enabling more accurate and earlier intervention in clinical settings.
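For illustration, the Grad-CAM step mentioned above can be sketched as follows. This is a minimal, hedged example assuming a Keras CNN with a named final convolutional layer; the tiny demo network, the layer name "last_conv", and the four-class output are placeholders, since the paper's attention-CNN architecture is not specified in the abstract.

```python
# Minimal Grad-CAM sketch (placeholder network; the paper's attention CNN
# and its layer names are assumptions, not the authors' implementation).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    """Heatmap of the regions that most increase the predicted class score."""
    grad_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[None, ...])        # add batch dimension
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_maps)              # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))            # global-average-pool the gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_maps[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()         # normalise to [0, 1]

# Placeholder model standing in for the attention CNN; four output classes
# are assumed for the AD stages.
inp = tf.keras.Input(shape=(128, 128, 1))
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inp)
x = tf.keras.layers.Conv2D(32, 3, activation="relu", name="last_conv")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
out = tf.keras.layers.Dense(4, activation="softmax")(x)
model = tf.keras.Model(inp, out)

scan = np.random.rand(128, 128, 1).astype("float32")           # stand-in MRI slice
heatmap = grad_cam(model, scan, "last_conv")
print(heatmap.shape)                                           # coarse saliency map over the scan
```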
Funding: Supported by Prince Sattam bin Abdulaziz University, project number (PSAU/2025/R/1446).
Abstract: Reliable water distribution and operational transparency are essential to urban infrastructure, yet leakage, pressure variability, and energy inefficiency often go undetected without improved control systems. This paper presents a hybrid framework that combines Multi-Agent Deep Reinforcement Learning (MADRL) with Shapley Additive Explanations (SHAP)-based Explainable AI (XAI) for adaptive and interpretable water resource management. In the proposed methodology, the agents learn decentralized control policies for pumps and valves from real-time network states, while SHAP provides human-understandable explanations of the agents’ decisions. The framework is validated on five diverse datasets: three real-world scenarios based on actual water consumption in NYC and Alicante, and two simulation-based benchmarks, LeakDB and the Water Distribution System Anomaly (WDSA) network. Empirical results demonstrate that the MADRL-SHAP hybrid system reduces water loss by up to 32%, improves energy efficiency by up to 25%, and maintains pressure stability between 91% and 93%, thereby outperforming traditional rule-based control, single-agent Deep Reinforcement Learning (DRL), and XGBoost-SHAP baselines. Furthermore, SHAP-based interpretation brings transparency to the proposed model, with the average explanation consistency across all prediction models reaching 88%, reinforcing the trustworthiness of the system’s decision-making and enabling utility operators to derive actionable insights from the model. The proposed framework thus addresses critical challenges of smart water distribution.
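As an illustration of how SHAP can be attached to a learned control policy, the sketch below uses a KernelExplainer on a stand-in policy function. The feature names, the linear stand-in policy, and the random background states are assumptions for demonstration only; the paper's actual agents, network model, and state encoding are not described in the abstract.

```python
# Illustrative sketch: SHAP attributions for a control policy's output
# (the policy, feature names, and background states below are placeholders,
# not the paper's MADRL agents).
import numpy as np
import shap

feature_names = ["node_pressure", "tank_level", "demand", "hour", "leak_flag"]

def policy(states):
    """Stand-in for a trained actor: maps state features to a pump-speed setting."""
    w = np.array([0.4, 0.3, 0.5, 0.05, 0.8])                  # hypothetical weights
    return states @ w

background = np.random.default_rng(0).uniform(0, 1, size=(100, 5))  # reference states
explainer = shap.KernelExplainer(policy, background)          # model-agnostic SHAP

state = np.array([[0.9, 0.2, 0.7, 0.5, 1.0]])                 # one observed network state
shap_values = explainer.shap_values(state)
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name:>14}: {value:+.3f}")                        # contribution to the chosen action
```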
Funding: Supported by the Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2024-RS-2022-00156287, 50%), and by the MSIT (Ministry of Science and ICT), Republic of Korea, under the Convergence Security Core Talent Training Business Support Program (IITP-2024-RS-2022-II221203, 50%), supervised by the IITP.
Abstract: With advancements in artificial intelligence (AI) technology, attackers are increasingly using sophisticated techniques, including ChatGPT. Endpoint Detection & Response (EDR) is a system that detects and responds to anomalous activities or security threats occurring on computers or endpoint devices within an organization. Unlike traditional antivirus software, EDR focuses on responding to a threat after it has occurred rather than blocking it. This study aims to overcome challenges in security control, such as increased log size, emerging security threats, and the technical demands faced by control staff. Previous studies have focused on AI detection models, emphasizing detection rates and model performance; however, the underlying reasons behind detection results were often insufficiently understood, leading to varying outcomes depending on the learning model. Additionally, the presence of both structured and unstructured logs, the growth in new security threats, and increasing technical disparities among control staff pose further challenges for effective security control. This study proposes improvements to the existing EDR system that overcome these limitations of security control. Data were analyzed during the preprocessing stage to identify potential threat factors that influence the detection process and its outcomes. To ensure objectivity and versatility, five widely recognized datasets were used, and eleven commonly used machine learning (ML) models for malware detection were tested, with the five best-performing models selected for further analysis. Explainable AI (XAI) techniques were then employed to assess the impact of preprocessing on the learning outcomes. The results indicate that the eXtreme Gradient Boosting (XGBoost) model outperformed the others. Moreover, the study conducts an in-depth analysis of the preprocessing phase, tracing backward from the detection result to infer potential threats and to classify the primary variables influencing the model’s prediction. This analysis applies SHapley Additive exPlanations (SHAP), an XAI technique that provides insight into the influence of specific features on detection outcomes, and suggests potential breaches by identifying common parameters in malware through file backtracking and weighting. The study also proposes a counter-detection analysis process to overcome the limitations of existing deep learning outcomes, understand the decision-making process of the AI, and enhance reliability. These contributions are expected to significantly enhance EDR systems and address existing limitations in security control.
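A minimal sketch of the XGBoost-plus-SHAP step described above is given below, assuming a synthetic feature matrix in place of the study's EDR log features; the actual datasets, preprocessing, and labels are not reproduced here.

```python
# Sketch of XGBoost + SHAP on placeholder features (the study's EDR logs,
# preprocessing, and labels are not reproduced here).
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))                               # placeholder log-derived features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)                 # placeholder malware label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)                         # fast, exact SHAP for tree models
shap_values = explainer.shap_values(X_te)
importance = np.abs(shap_values).mean(axis=0)                 # mean |SHAP| per feature
print("features driving detections:", np.argsort(importance)[::-1][:5])
```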
Abstract: People learn causal relations from childhood using counterfactual reasoning. Counterfactual reasoning uses counterfactual examples, which take the form of “what if this had happened differently”. Counterfactual examples are also the basis of counterfactual explanation in explainable artificial intelligence (XAI). However, a framework that relies solely on optimization algorithms to find and present counterfactual samples cannot help users gain a deeper understanding of the system; without a way to verify their understanding, users can even be misled by such explanations. These limitations can be overcome through an interactive and iterative framework that allows users to explore their desired “what-if” scenarios. The purpose of our research is to develop such a framework. In this paper, we present our “what-if” XAI framework (WiXAI), which visualizes the artificial intelligence (AI) classification model from the perspective of the user’s sample and guides their “what-if” exploration. We also formulate how to use the WiXAI framework to generate counterfactuals and to understand the feature-feature and feature-output relations in depth for a local sample. These relations help move users toward causal understanding.
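To make the counterfactual idea concrete, the toy sketch below nudges a sample across a linear classifier's decision boundary. It is an illustrative stand-in only: WiXAI's own interactive generation procedure is not detailed in the abstract, and the dataset and classifier here are placeholders.

```python
# Toy counterfactual ("what-if") search on a stand-in linear classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, target_class, step=0.05, max_iter=1000):
    """Nudge the sample along the model's coefficient direction until the label flips."""
    x_cf = x.astype(float).copy()
    direction = clf.coef_[0] if target_class == 1 else -clf.coef_[0]
    direction = direction / np.linalg.norm(direction)
    for _ in range(max_iter):
        if clf.predict(x_cf[None, :])[0] == target_class:
            return x_cf
        x_cf += step * direction
    return None

x0 = X[0]
original = int(clf.predict(x0[None, :])[0])
x_cf = counterfactual(x0, target_class=1 - original)
print("original class:", original)
if x_cf is not None:
    print("what-if feature changes:", x_cf - x0)              # how much each feature moved
```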
Funding: Supported by CONAHCYT (Consejo Nacional de Humanidades, Ciencias y Tecnologias).
Abstract: The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments, both to ensure that decisions are based on trustworthy algorithms and to help healthcare workers understand the decisions these algorithms make. Such models can enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates sophisticated models for classifying cancer images. This research presents an advanced investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI framework reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The study also discusses potential applications of the proposed XAI models in the smart healthcare environment, helping to ensure trust and accountability in AI-based decisions, which is essential for achieving a safe and reliable smart healthcare environment.
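For reference, the less common diagnostic metrics reported above (FDR, FOR, DOR) relate to an ordinary confusion matrix as in the sketch below; the counts used are hypothetical and are not the paper's results.

```python
# How the reported diagnostic metrics relate to a confusion matrix
# (hypothetical counts; these are not the paper's results).
def diagnostic_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                                   # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    fdr = fp / (tp + fp)                                      # false discovery rate = 1 - precision
    fomr = fn / (fn + tn)                                     # false omission rate
    dor = (tp * tn) / (fp * fn)                               # diagnostic odds ratio
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f1=f1, FDR=fdr, FOR=fomr, DOR=dor)

print(diagnostic_metrics(tp=937, fp=96, fn=63, tn=904))        # hypothetical example
```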
Funding: Supported by EDF Energy R&D UK Centre Limited and EPSRC under Grant EP/V519625/1.
Abstract: Electricity markets are highly complex, involving many interactions and intricate dependencies that make it hard to understand the inner workings of the market and what drives prices. Econometric white-box methods have been developed for this purpose, but they are not as powerful as deep neural network (DNN) models. In this paper, we use a DNN to forecast prices and then use XAI methods to understand the factors driving price dynamics in the market, with the objective of increasing our understanding of how different electricity markets work. To do so, we apply explainable methods such as SHAP and Gradient, combined with visual techniques like heatmaps (saliency maps), to analyse the behaviour and contributions of various features across five electricity markets. We also introduce the novel concepts of SSHAP values and SSHAP lines to enhance the representation of complex, high-dimensional tabular models.
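A minimal sketch of gradient-based attribution for a price-forecasting DNN follows. The architecture, feature count, and random inputs are placeholders, and the paper's SSHAP values and SSHAP lines are not reproduced; only the generic gradient/saliency-heatmap idea is shown.

```python
# Gradient/saliency sketch for a price-forecasting DNN (placeholder network
# and random inputs; the paper's SSHAP constructs are not implemented here).
import numpy as np
import tensorflow as tf

n_features = 12                                                # e.g. load, wind, fuel prices, lags
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                                  # day-ahead price
])

x = tf.convert_to_tensor(np.random.rand(24, n_features), dtype=tf.float32)  # 24 hourly samples
with tf.GradientTape() as tape:
    tape.watch(x)
    price = model(x)
grads = tape.gradient(price, x)                                # d(price)/d(feature) per sample
saliency = np.abs(grads.numpy())                               # hour-by-feature heatmap matrix
print("most influential feature per hour:", saliency.argmax(axis=1))
```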
Funding: Supported by the J. Gustaf Richert Stiftelse (2023-00884), Energimyndigheten (P2021-00248), Svenska Forskningsrådet Formas (2022-01475), Kungl. Skogs- och Lantbruksakademien (Royal Swedish Academy of Forestry and Agriculture, KSLA: GFS2023-0131, BYG2023-0007, GFS2024-0155), and Anna and Nils Håkansson's Foundation (nhbidr24-6).
Abstract: The advancement of artificial intelligence (AI) in material design and engineering has led to significant improvements in predictive modeling of material properties. However, the lack of interpretability in machine learning (ML)-based material informatics presents a major barrier to its practical adoption. This study proposes a novel quantitative computational framework that integrates ML models with explainable artificial intelligence (XAI) techniques to enhance both predictive accuracy and interpretability in material property prediction. The framework systematically incorporates a structured pipeline, including data processing, feature selection, model training, performance evaluation, explainability analysis, and real-world deployment. It is validated through a representative case study on the prediction of high-performance concrete (HPC) compressive strength, using a comparative analysis of ML models such as Random Forest, XGBoost, Support Vector Regression (SVR), and Deep Neural Networks (DNNs). The results demonstrate that XGBoost achieves the highest predictive performance (R² = 0.918), while SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) provide detailed insights into feature importance and material interactions. Additionally, deployment of the trained model as a cloud-based Flask-Gunicorn API enables real-time inference, ensuring scalability and accessibility for industrial and research applications. The proposed framework addresses key limitations of existing ML approaches by integrating advanced explainability techniques, systematically handling nonlinear feature interactions, and providing a scalable deployment strategy. This study contributes to the development of interpretable and deployable AI-driven material informatics, bridging the gap between data-driven predictions and fundamental material science principles.
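A minimal sketch of the Flask inference endpoint described above is shown below. The route name, feature schema, and the synthetic stand-in model are assumptions; the paper's trained XGBoost model and actual API are not reproduced here.

```python
# Sketch of a Flask inference endpoint with a synthetic stand-in model
# (route name, feature schema, and training data are assumptions).
import numpy as np
import xgboost as xgb
from flask import Flask, jsonify, request

FEATURES = ["cement", "slag", "fly_ash", "water",              # hypothetical HPC mix features
            "superplasticizer", "coarse_agg", "fine_agg", "age"]

# Stand-in model fitted on random data so the sketch runs end to end.
rng = np.random.default_rng(0)
X_fake = rng.uniform(0, 1, size=(200, len(FEATURES)))
y_fake = 40 * X_fake[:, 0] - 10 * X_fake[:, 3] + 5 * X_fake[:, 7]
model = xgb.XGBRegressor(n_estimators=50).fit(X_fake, y_fake)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                               # {"cement": ..., "slag": ..., ...}
    x = np.array([[payload[f] for f in FEATURES]])
    strength = float(model.predict(x)[0])
    return jsonify({"compressive_strength_mpa": strength})

if __name__ == "__main__":
    # In production such an app is served with Gunicorn, e.g.: gunicorn -w 4 app:app
    app.run(port=5000)
```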
Funding: Supported by the National Natural Science Foundation of China (Nos. 61772111 and 72010107002).
Abstract: Human adoption of artificial intelligence (AI) techniques is largely hampered by the increasing complexity and opacity of AI development. Explainable AI (XAI) techniques, with various methods and tools, have been developed to bridge this gap between high-performance black-box AI models and human understanding. However, the current adoption of XAI techniques still lacks "human-centered" guidance for designing proper solutions that meet different stakeholders' needs in XAI practice. We first summarize a human-centered demand framework that categorizes stakeholders into five key roles with specific demands by reviewing existing research, and then extract six commonly used human-centered XAI evaluation measures that are helpful for validating the effect of XAI. In addition, a taxonomy of XAI methods is developed for visual computing, with an analysis of method properties. Holding clearer human demands and XAI methods in mind, we take a medical image diagnosis scenario as an example to present an overview of how extant XAI approaches for visual computing fulfil stakeholders' human-centered demands in practice, and we check the availability of open-source XAI tools for stakeholders' use. This survey provides guidance for matching diverse human demands with appropriate XAI methods or tools in specific applications, together with a summary of the main challenges and future work toward human-centered XAI in practice.
Abstract: Breast cancer remains one of the leading causes of cancer incidence and mortality among women worldwide. Early and accurate diagnosis plays a pivotal role in optimizing patient prognosis. Imaging techniques such as mammography, ultrasound, and magnetic resonance imaging (MRI) play crucial roles in the diagnosis of breast cancer. However, these techniques face multiple challenges, including accuracy fluctuations, significant operator dependency, and difficulties in result interpretation. In this context, the integration of Artificial Intelligence (AI), especially Explainable Artificial Intelligence (XAI), has become a revolutionary approach to improving diagnostic accuracy and enhancing trust. This review focuses on the comparative application of XAI technologies across different imaging modalities in breast cancer diagnosis. It examines core XAI methods such as Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM), with an emphasis on their effectiveness in enhancing model interpretability and improving clinical utility. The review not only analyzes the advantages and limitations of XAI in mammography, ultrasound, and MRI applications but also highlights its contribution to increasing the transparency of AI-assisted predictions. Additionally, it evaluates the performance of XAI in addressing false positives, false negatives, and the challenges of multimodal imaging data integration. The core value of this review lies in its comprehensive analysis of the potential of XAI to bridge the gap between advances in AI technology and clinical application. By enhancing transparency, XAI can boost clinicians' trust in AI, facilitating its smoother integration into diagnostic workflows, thereby promoting personalized medical practice and improving patient treatment outcomes. In conclusion, despite significant progress made by XAI in improving AI model interpretability and accuracy, challenges remain in terms of computational complexity, general applicability, and clinical acceptance. Future research should focus on optimizing XAI methods, fostering interdisciplinary collaboration, and developing standardized frameworks to ensure the scalability and reliability of XAI technologies in diverse clinical environments.
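As a small, hedged illustration of the LIME image explanations reviewed above, the sketch below runs LimeImageExplainer on a random image with a trivial stand-in classifier; a real mammography, ultrasound, or MRI model would replace the stand-in.

```python
# LIME image-explanation sketch with a trivial stand-in classifier
# (a real breast-imaging model would replace `classifier_fn`).
import numpy as np
from lime import lime_image

def classifier_fn(images):
    """Stand-in model: scores a batch of RGB images by mean intensity."""
    score = images.mean(axis=(1, 2, 3))
    score = (score - score.min()) / (np.ptp(score) + 1e-8)
    return np.stack([1 - score, score], axis=1)                # [P(benign), P(malignant)]

image = np.random.default_rng(0).uniform(0, 1, size=(64, 64, 3))  # placeholder image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn,
                                          top_labels=2, num_samples=200)
label = explanation.top_labels[0]
_, mask = explanation.get_image_and_mask(label, positive_only=True, num_features=5)
print("superpixels supporting the prediction:", np.unique(mask))
```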
Abstract: The increasing use of cloud-based devices has made cybersecurity and unwanted network traffic critical concerns, and cloud environments pose significant challenges for maintaining privacy and security. Intrusion Detection Systems (IDS) have been developed to tackle these issues, but most conventional IDS models struggle with unseen cyberattacks and complex, high-dimensional data. This paper introduces INTRUMER, a novel distributed, explainable, and heterogeneous transformer-based intrusion detection system that offers balanced accuracy, reliability, and security in cloud settings through multiple cooperating modules. Traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and a Naive Bayes algorithm performs the classification of features. The selected features are then forwarded to the Heterogeneous Attention Transformer (HAT) module, which takes the contextual interactions of the network traffic into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. Using the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to strengthen preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, achieving 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. These results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
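As a generic, hedged sketch of the feature-screening-plus-Naive-Bayes idea mentioned above (the Falcon Optimization Algorithm, TC&TM, HAT, and XPM modules are specific to the paper and are not reproduced), a simple combination of mutual-information feature selection and Gaussian Naive Bayes on placeholder traffic features might look as follows.

```python
# Generic feature-screening + Naive Bayes sketch on placeholder traffic features.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))                                # placeholder flow features
y = (X[:, 5] - X[:, 17] + rng.normal(scale=0.5, size=5000) > 0).astype(int)  # benign/malicious

X_sel = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)  # coarse feature selection
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=0)

clf = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```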