Journal Articles
211 articles found
Explainable AI for predicting the strength of bio-cemented sands
1
Authors: Waleed El-Sekelly, Muhammad Nouman Amjad Raja, Tarek Abdoun. Journal of Rock Mechanics and Geotechnical Engineering, 2026, Issue 2, pp. 1552-1569 (18 pages)
The biological stabilization of soil using microbially induced carbonate precipitation (MICP) employs ureolytic bacteria to precipitate calcium carbonate (CaCO3), which binds soil particles, enhancing strength, stiffness, and erosion resistance. The unconfined compressive strength (UCS), a key measure of soil strength, is critical in geotechnical engineering as it directly reflects the mechanical stability of treated soils. This study integrates explainable artificial intelligence (XAI) with geotechnical insights to model the UCS of MICP-treated sands. Using 517 experimental data points and a combination of input variables, including median grain size (D50), coefficient of uniformity (Cu), void ratio (e), urea concentration (Mu), calcium concentration (Mc), optical density (OD) of the bacterial solution, pH, and total injection volume (Vt), five machine learning (ML) models were developed and optimized: eXtreme gradient boosting (XGBoost), light gradient boosting machine (LightGBM), random forest (RF), gene expression programming (GEP), and multivariate adaptive regression splines (MARS). The ensemble models (XGBoost, LightGBM, and RF) were tuned with the Chernobyl disaster optimizer (CDO), a recently developed metaheuristic algorithm. Of these, LightGBM-CDO achieved the highest accuracy for UCS prediction. XAI techniques, namely feature importance analysis (FIA), SHapley Additive exPlanations (SHAP), and partial dependence plots (PDPs), were also used to investigate the complex non-linear relationships between the input and output variables. The results demonstrate that XAI-driven models can enhance the predictive accuracy and interpretability of MICP processes, offering a sustainable pathway for optimizing geotechnical applications.
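The pipeline this abstract describes (a tree-ensemble regressor over the listed soil and treatment variables, followed by a feature-importance analysis) can be sketched roughly as follows. Everything here is illustrative: the data are synthetic, scikit-learn's GradientBoostingRegressor stands in for LightGBM-CDO, and permutation importance stands in for SHAP/FIA.

```python
# Illustrative sketch, not the paper's implementation. Feature names follow
# the abstract; the UCS target is a made-up function of Mu and Mc.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["D50", "Cu", "e", "Mu", "Mc", "OD", "pH", "Vt"]
X = rng.uniform(0, 1, size=(517, len(features)))   # 517 points, as in the study
y = 2.0 * X[:, 3] + 1.5 * X[:, 4] + 0.3 * rng.normal(size=517)  # synthetic UCS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R2:", round(model.score(X_te, y_te), 3))

# Permutation importance as a lightweight stand-in for SHAP / FIA
imp = permutation_importance(model, X_te, y_te, random_state=0)
ranked = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])
print("top features:", [name for name, _ in ranked[:2]])
```

With this synthetic target the two concentration variables dominate the ranking, mirroring the kind of output FIA/SHAP would produce on the real data.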
Keywords: Microbially induced carbonate precipitation (MICP); bio-cementation; unconfined compressive strength (UCS); explainable artificial intelligence (XAI); optimization
Graph-Based Intrusion Detection with Explainable Edge Classification Learning
2
Authors: Jaeho Shin, Jaekwang Kim. Computers, Materials & Continua, 2026, Issue 1, pp. 610-635 (26 pages)
Network attacks have become a critical issue in the internet security domain. Artificial intelligence (AI)-based detection methodologies have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to explain AI detection results logically. We propose a method for classifying network attacks using graph models that can explain its detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks via edge classification. To explain the prediction results, we observe numerical changes under random masking and calculate the importance of neighbors, allowing us to extract significant subgraphs. Experiments on six public datasets demonstrate superior performance, with an average F1-score of 0.960 and accuracy of 0.964, outperforming traditional machine learning and other graph models. The visual representation of the extracted subgraphs highlights the neighboring nodes with the greatest impact on the results, thus explaining the detections. In conclusion, this study demonstrates that graph-based models are suitable for network attack detection in complex environments, and that the importance of graph neighbors can be calculated to analyze results efficiently. This approach can contribute to real-world network security analysis and provides a new direction for the field.
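The neighbor-masking explanation step this abstract describes can be illustrated with a toy sketch: mask each neighbor in turn and record how much a scoring function drops. The graph, weights, and `edge_score` function below are all hypothetical stand-ins for the trained GNN edge classifier.

```python
# Toy neighbor-importance via masking; not the paper's model.
# graph: node -> set of neighbors; weights: hypothetical learned influence.
graph = {"A": {"B", "C", "D"}, "B": {"A"}, "C": {"A"}, "D": {"A"}}
weights = {"B": 0.7, "C": 0.2, "D": 0.1}

def edge_score(center, masked=frozenset()):
    # Stand-in for the GNN's classification score around `center`.
    return sum(weights[n] for n in graph[center] if n not in masked)

base = edge_score("A")
importance = {n: base - edge_score("A", masked={n}) for n in graph["A"]}
# Neighbors whose removal changes the score most form the explanatory subgraph.
top = max(importance, key=importance.get)
print(top, importance[top])
```

The neighbors with the largest score drop are exactly the ones the paper's significant subgraphs would highlight.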
Keywords: Intrusion detection; graph neural network; explainable AI; network attacks; GraphSAGE
Cascading Class Activation Mapping: A Counterfactual Reasoning-Based Explainable Method for Comprehensive Feature Discovery
3
Authors: Seoyeon Choi, Hayoung Kim, Guebin Choi. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 1043-1069 (27 pages)
Most convolutional neural network (CNN) interpretation techniques visualize only the dominant cues that the model relies on, but there is no guarantee that these represent all the evidence the model uses for classification. This limitation becomes critical when hidden secondary cues, potentially more meaningful than the visualized ones, remain undiscovered. This study introduces CasCAM (Cascaded Class Activation Mapping) to address this fundamental limitation through counterfactual reasoning. By asking "if this dominant cue were absent, what other evidence would the model use?", CasCAM progressively masks the most salient features and systematically uncovers the hierarchy of classification evidence hidden beneath them. Experimental results demonstrate that CasCAM effectively discovers the full spectrum of reasoning evidence and can be applied universally with nine existing interpretation methods.
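The cascading idea ("mask the dominant cue, then look again") can be sketched in a few lines. The `saliency` function here is a trivial stand-in for a real CAM method, and the image is a toy array with one dominant and one hidden secondary cue.

```python
# Minimal sketch of the cascade: repeatedly mask the most salient location
# and re-run the saliency stand-in to surface secondary cues.
import numpy as np

def saliency(img):
    # Stand-in CAM: responds to the brightest remaining pixels.
    return img.copy()

img = np.zeros((4, 4))
img[0, 0], img[2, 3] = 1.0, 0.6       # dominant cue and hidden secondary cue

cues = []
for _ in range(2):                    # two cascade steps
    cam = saliency(img)
    r, c = np.unravel_index(np.argmax(cam), cam.shape)
    cues.append((r, c))
    img[r, c] = 0.0                   # counterfactual: remove the cue
print(cues)
```

The first pass finds the dominant cue; only after masking it does the second pass surface the secondary cue, which is exactly the failure mode of single-shot CAM that the paper targets.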
Keywords: Explainable AI; class activation mapping; counterfactual reasoning; shortcut learning; feature discovery
Optimizing UCS Prediction Models through XAI-Based Feature Selection in Soil Stabilization
4
Authors: Ahmed Mohammed Awad Mohammed, Omayma Husain, Mosab Hamdan, Abdalmomen Mohammed Abdullah Ansari, Atef Badr, Abubakar Elsafi, Abubakr Siddig. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 524-549 (26 pages)
Unconfined compressive strength (UCS) is a key parameter for assessing the stability and performance of stabilized soils, yet traditional laboratory testing is both time- and resource-intensive. This study presents an interpretable machine learning approach to UCS prediction, pairing five models (random forest (RF), gradient boosting (GB), extreme gradient boosting (XGB), CatBoost, and k-nearest neighbors (KNN)) with SHapley Additive exPlanations (SHAP) for enhanced interpretability and to guide feature removal. A complete dataset of 12 geotechnical and chemical parameters (Atterberg limits, compaction properties, stabilizer chemistry, dosage, and curing time) was used to train and test the models, with R², RMSE, MSE, and MAE used to assess performance. Initial results with all 12 features indicated that the boosting-based models (GB, XGB, CatBoost) exhibited the highest predictive accuracy (R² = 0.93) with satisfactory generalization on test data, followed by RF and KNN. SHAP analysis consistently identified CaO content, curing time, stabilizer dosage, and compaction parameters as the most important features, aligning with established soil stabilization mechanisms. Models were then retrained on the top 8 and top 5 SHAP-ranked features. Notably, GB, XGB, and CatBoost maintained comparable accuracy with the reduced input sets, while RF was moderately sensitive and KNN improved slightly owing to the reduced dimensionality. The findings confirm that SHAP-guided feature reduction enables cost-effective UCS prediction by reducing laboratory test requirements without significant accuracy loss. The suggested hybrid approach offers an explainable, interpretable, and cost-effective tool for geotechnical engineering practice.
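The retrain-on-top-k-features workflow the abstract reports can be sketched as follows. This is a hedged illustration: the 12-feature dataset is synthetic, and impurity-based feature importances stand in for SHAP rankings.

```python
# Illustrative retrain-on-reduced-features loop; not the paper's code.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 12))                      # 12 inputs, as in the study
y = 3 * X[:, 0] + 2 * X[:, 1] + X[:, 2] + 0.1 * rng.normal(size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

full = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)
top5 = np.argsort(full.feature_importances_)[::-1][:5]  # stand-in for SHAP rank
reduced = GradientBoostingRegressor(random_state=1).fit(X_tr[:, top5], y_tr)

print("full  R2:", round(full.score(X_te, y_te), 3))
print("top-5 R2:", round(reduced.score(X_te[:, top5], y_te), 3))
```

As in the study, the reduced model keeps most of the accuracy because the uninformative features contribute nothing to the target.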
Keywords: Explainable AI; feature selection; machine learning; SHAP analysis; soil stabilization; unconfined compressive strength
Explainability Obligations of Judicial AI in the Context of XAI: From the Theoretical Perspective of Judicial Sincerity (Cited by 3)
5
Authors: Feng Yujun, Shen Hongyi. Journal of Beijing University of Aeronautics and Astronautics (Social Sciences Edition), 2025, Issue 4, pp. 29-41 (13 pages)
Facing the "black box" problem of artificial intelligence, explainable AI (XAI) is regarded as an effective tool for enhancing explainability in judicial adjudication. However, the meaning of explainability in XAI actually diverges from the explainability requirements of judicial adjudication. Approaching explainability from the theoretical perspective of judicial sincerity, the meaning of judicial sincerity can be divided into subjective and objective sincerity according to whether the judge's subjective state is taken into account, so as to realize the judicial value of guiding action with justified reasons. The study finds that the post-hoc character of XAI explanations fails to meet the requirement of subjective sincerity and therefore departs from judicial values, whereas intelligible artificial intelligence (IAI) can satisfy judicial sincerity at the subjective level. Accordingly, based on the provisions on the duty to explain automated decision-making in the Personal Information Protection Law of the People's Republic of China, the explanation duties of deployers and providers of judicial AI and of judges in individual cases should be concretized, ensuring that IAI rather than XAI is used in the key stages of judicial adjudication, thereby realizing the core requirement of judicial sincerity.
Keywords: Judicial artificial intelligence; explainability; judicial sincerity; explainable artificial intelligence (XAI); intelligible artificial intelligence (IAI)
"Separation and Integration of Capability and Intelligence": A Phased Development Model for Judicial AI
6
Authors: Sun Xiaoxia, Wei Yiming. SJTU Law Review (PKU Core), 2026, Issue 2, pp. 5-19 (15 pages)
Now that judicial artificial intelligence has been clearly positioned as "assistive", the future direction of China's judicial AI should still be to enhance the "intelligence" of assistive judicial AI. Drawing on the "Dreyfus model" and the two-dimensional separability of AI "capability" and "intelligence", China can construct a "separation and integration of capability and intelligence" model for assistive judicial AI. Enhancing the "intelligence" of judicial AI currently faces technical bottlenecks in causal judgment and value judgment, but these are not without technical solutions. The "intelligence" of China's assistive judicial AI can be enhanced in phases through three technical approaches: first, "thinking together with judges", using explainability as the lever to achieve a breakthrough in usability at the formal level; second, "aligning with judges' thinking", advancing value alignment under the constraints of legal professional skills and professional ethics; and third, "thinking like a judge", climbing the ladder of causal and value judgment to strengthen reasoning support for complex discretionary questions under the "assistive" positioning.
Keywords: Judicial artificial intelligence; assistive judicial AI; Dreyfus model; explainability; value alignment
Unveiling dominant factors for gully distribution in wildfire-affected areas using explainable AI: A case study of Xiangjiao catchment, Southwest China (Cited by 2)
7
Authors: ZHOU Ruichen, HU Xiewen, XI Chuanjie, HE Kun, DENG Lin, LUO Gang. Journal of Mountain Science, 2025, Issue 8, pp. 2765-2792 (28 pages)
Wildfires significantly disrupt the physical and hydrologic conditions of the environment, leading to vegetation loss and altered surface geo-material properties. These complex dynamics promote post-fire gully erosion, yet the key conditioning factors (e.g., topography, hydrology) remain insufficiently understood. This study proposes a novel artificial intelligence (AI) framework that integrates four machine learning (ML) models with the SHapley Additive exPlanations (SHAP) method, offering a hierarchical, global-to-local perspective on the dominant factors controlling gully distribution in wildfire-affected areas. In a case study of the Xiangjiao catchment, burned on March 28, 2020, in Muli County, Sichuan Province, Southwest China, we derived 21 geo-environmental factors to assess the susceptibility of post-fire gully erosion using logistic regression (LR), support vector machine (SVM), random forest (RF), and convolutional neural network (CNN) models. SHAP-based model interpretation revealed eight key conditioning factors: topographic position index (TPI), topographic wetness index (TWI), distance to stream, mean annual precipitation, differenced normalized burn ratio (dNBR), land use/cover, soil type, and distance to road. Comparative model evaluation demonstrated that reduced-variable models incorporating these dominant factors achieved accuracy comparable to that of the initial-variable models, with AUC values exceeding 0.868 across all ML algorithms. These findings provide critical insights into gully erosion behavior in wildfire-affected areas, supporting decision-making in environmental management and hazard mitigation.
Keywords: Gully erosion susceptibility; explainable AI; wildfire; geo-environmental factors; machine learning
Intrumer: A Multi-Module Distributed Explainable IDS/IPS for Securing Cloud Environments
8
Authors: Nazreen Banu A, S.K.B. Sangeetha. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 579-607 (29 pages)
The increasing use of cloud-based devices has brought cybersecurity and unwanted network traffic to a critical point. Cloud environments pose significant challenges in maintaining privacy and security, and global approaches such as intrusion detection systems (IDS) have been developed to tackle these issues. However, most conventional IDS models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces a novel distributed, explainable, heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together. The traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and a Naïve Bayes algorithm performs feature classification. The selected features are forwarded to the Heterogeneous Attention Transformer (HAT) module, where the contextual interactions of the network traffic are taken into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. With the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to strengthen preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. These results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
Keywords: Cloud computing; intrusion detection system; transformers; explainable artificial intelligence (XAI)
Exploration and Practice of an XAI Human-Machine Trust Mechanism
9
Authors: Luo Zhongyan, Xia Zhengxun, Tang Jianfei, Yang Yifan, Yang Hongshan, Li Haohua, Zhang Yan. Big Data, 2025, Issue 4, pp. 102-125 (24 pages)
Artificial intelligence (AI) technologies have made remarkable progress across industries, but AI's black-box problem, potential risks, and the resulting crisis of user trust have limited further adoption. To address the trust problem, this paper proposes a general U-XAI (unified-trustworthy XAI) human-machine trust mechanism and governance framework, aimed at the eight classes of trust challenges arising from the "socio-technical gap" in AI. The framework comprises four modules: trust-chain governance, integrity governance, understandability governance, and acceptability governance. Combining theoretical models with engineering practice, it offers a comprehensive solution for building trustworthy AI. The proposed U-XAI framework can effectively improve the trustworthiness of AI systems, promote their application across social domains, and provide a practical basis and reference for trustworthy AI governance.
Keywords: Human-machine trust; explainable AI; trustworthy AI
Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation
10
Authors: Khulud Salem Alshudukhi, Sijjad Ali, Mamoona Humayun, Omar Alruwaili. Computer Modeling in Engineering & Sciences, 2025, Issue 12, pp. 3029-3085 (57 pages)
Problem: The integration of artificial intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the "black box" nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity, as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a path for developing robust, next-generation XAI frameworks. Solution: This review provides a systematic examination of XAI techniques (e.g., SHAP, LIME, Grad-CAM) and their applications in intrusion detection, malware analysis, and fraud prevention. It critically evaluates the security risks posed by XAI, including model inversion and explanation-guided evasion attacks, and assesses corresponding defense strategies such as adversarially robust training, differential privacy, and secure-XAI deployment patterns. Contribution: The primary contributions of this work are: (1) a comparative analysis of XAI methods tailored for cybersecurity contexts; (2) an identification of the critical trade-off between model interpretability and security robustness; (3) a synthesis of defense mechanisms to mitigate XAI-specific vulnerabilities; and (4) a forward-looking perspective proposing future research directions, including quantum-safe XAI, hybrid neuro-symbolic models, and the integration of XAI into Zero Trust Architectures. This review serves as a foundational resource for developing transparent, trustworthy, and resilient AI-driven cybersecurity systems.
Keywords: Explainable AI (XAI); cybersecurity; adversarial robustness; privacy-preserving techniques; regulatory compliance; zero trust architecture
PPG-Based Digital Biomarker for Diabetes Detection with Multiset Spatiotemporal Feature Fusion and XAI
11
Authors: Mubashir Ali, Jingzhen Li, Zedong Nie. Computer Modeling in Engineering & Sciences, 2025, Issue 12, pp. 4153-4177 (25 pages)
Diabetes imposes a substantial burden on global healthcare systems. Worldwide, nearly half of individuals with diabetes remain undiagnosed, while conventional diagnostic techniques are often invasive, painful, and expensive. In this study, we propose a noninvasive approach for diabetes detection using photoplethysmography (PPG), which is widely integrated into modern wearable devices. First, we derived velocity plethysmography (VPG) and acceleration plethysmography (APG) signals from PPG to construct multi-channel waveform representations. Second, we introduced a novel multiset spatiotemporal feature fusion framework that integrates hand-crafted temporal, statistical, and nonlinear features with recursive feature elimination and deep feature extraction using a one-dimensional statistical convolutional neural network (1DSCNN). Finally, we developed an interpretable diabetes detection method based on XGBoost with explainable artificial intelligence (XAI) techniques: SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) were employed to identify and interpret potential digital biomarkers associated with diabetes. To validate the proposed method, we extended the publicly available Guilin People's Hospital dataset by incorporating in-house clinical data from ten subjects, thereby enhancing data diversity. A subject-independent cross-validation strategy was applied to ensure that testing subjects remained independent of the training data for robust generalization. Compared with existing state-of-the-art methods, our approach achieved superior performance, with an area under the curve (AUC) of 80.5 ± 15.9%, sensitivity of 77.2 ± 7.5%, and specificity of 64.3 ± 18.2%. These results demonstrate that the proposed approach provides a noninvasive, interpretable, and accessible solution for diabetes detection using PPG signals.
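The first step of the described pipeline, deriving VPG and APG channels from a PPG trace by successive differentiation, can be sketched as follows. The sampling rate and the sine-wave "pulse" are assumptions for illustration, not values from the paper.

```python
# Sketch: build a multi-channel (PPG, VPG, APG) representation by
# numerically differentiating a toy PPG waveform twice.
import numpy as np

fs = 100.0                                # Hz, assumed sampling rate
t = np.arange(0, 1, 1 / fs)
ppg = np.sin(2 * np.pi * 1.0 * t)         # toy 1 Hz pulse waveform

vpg = np.gradient(ppg, 1 / fs)            # first derivative (velocity)
apg = np.gradient(vpg, 1 / fs)            # second derivative (acceleration)
channels = np.stack([ppg, vpg, apg])      # multi-channel waveform input
print(channels.shape)
```

For the toy 1 Hz sine, the derivative at t = 0 should be close to 2π, which is a quick sanity check on the differentiation step.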
Keywords: Diabetes detection; photoplethysmography (PPG); spatiotemporal fusion; subject-independent validation; digital biomarker; explainable AI (XAI)
Robust False Data Injection Identification Framework for Power Systems Using Explainable Deep Learning
12
Authors: Ghadah Aldehim, Shakila Basheer, Ala Saleh Alluhaidan, Sapiah Sakri. Computers, Materials & Continua, 2025, Issue 11, pp. 3599-3619 (21 pages)
Although digital transformation in power systems has added new ways to monitor and control them, it has also introduced new cyber-attack risks, mainly from false data injection (FDI) attacks. When sensors and operations are compromised, the result can be serious disruptions, failures, and blackouts. In response to this challenge, this paper presents a reliable detection framework that leverages bidirectional long short-term memory (Bi-LSTM) networks and employs explanatory methods from artificial intelligence (AI). The suggested architecture not only detects potential fraud with high accuracy but also makes its decisions transparent, enabling operators to take appropriate action. The method utilizes model-free, interpretable tools to identify essential input elements, thereby making predictions more understandable and usable. Detection performance is further enhanced by correcting class imbalance with Synthetic Minority Over-sampling Technique (SMOTE)-based data balancing. Detailed experiments on benchmark power system data confirm that the model functions correctly. Experimental results showed that Bi-LSTM + explainable AI (XAI) achieved an average accuracy of 94%, surpassing XGBoost (89%) and Bagging (84%), while ensuring explainability and a high level of robustness across various operating scenarios. An ablation study shows that bidirectional recursive modeling and ReLU activation help improve generalization and predictability. Additionally, examining model decisions through LIME identifies which features are crucial for making smart-grid operational decisions in real time. The research offers a practical and flexible approach for detecting FDI attacks, improving the security of cyber-physical systems, and facilitating the deployment of AI in energy infrastructure.
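The SMOTE balancing step mentioned above can be illustrated with a minimal toy version of the interpolation idea (not the imbalanced-learn API): new minority-class samples are drawn on the line segment between a minority sample and a nearby minority neighbor.

```python
# Toy SMOTE-style oversampling; a stand-in sketch, not imblearn's SMOTE.
import numpy as np

rng = np.random.default_rng(0)
minority = rng.normal(loc=5.0, size=(10, 3))      # e.g., rare FDI-attack windows

def smote_sample(X, rng):
    i = rng.integers(len(X))
    d = np.linalg.norm(X - X[i], axis=1)
    j = np.argsort(d)[1]                          # nearest neighbor (not itself)
    lam = rng.uniform()                           # interpolation coefficient
    return X[i] + lam * (X[j] - X[i])

synthetic = np.array([smote_sample(minority, rng) for _ in range(20)])
balanced = np.vstack([minority, synthetic])
print(balanced.shape)
```

Because each synthetic point is a convex combination of two real minority samples, the new points stay inside the minority class's range, which is the property that makes SMOTE safer than naive duplication.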
Keywords: False data injection attacks; bidirectional long short-term memory (Bi-LSTM); explainable AI (XAI); power systems
An Efficient Explainable AI Model for Accurate Brain Tumor Detection Using MRI Images
13
Authors: Fatma M. Talaat, Mohamed Salem, Mohamed Shehata, Warda M. Shaban. Computer Modeling in Engineering & Sciences, 2025, Issue 8, pp. 2325-2358 (34 pages)
The diagnosis of brain tumors is an extended process that depends significantly on the expertise and skills of radiologists. The rise in patient numbers has substantially increased the data processing volume, making conventional methods both costly and inefficient. Recently, artificial intelligence (AI) has gained prominence for developing automated systems that can accurately diagnose or segment brain tumors in a shorter time frame. Many researchers have examined algorithms that provide both speed and accuracy in detecting and classifying brain tumors. This paper proposes a new AI-based model, called the Brain Tumor Detection (BTD) model, based on brain tumor magnetic resonance images (MRIs). The proposed BTD model comprises three main modules: (i) an Image Processing Module (IPM), (ii) a Patient Detection Module (PDM), and (iii) Explainable AI (XAI). In the first module (IPM), the dataset is preprocessed through two stages: feature extraction and feature selection. First, the MRI is preprocessed and the images are converted into a set of features using several feature extraction methods: gray-level co-occurrence matrix, histogram of oriented gradients, local binary patterns, and Tamura features. Next, the most effective features are selected using Improved Gray Wolf Optimization (IGWO), a hybrid methodology consisting of a Filter Selection Step (FSS) using the information gain ratio as an initial selection stage and Binary Gray Wolf Optimization (BGWO) to further optimize and refine the chosen features for better tumor detection. These features are then fed to the PDM using several classifiers, and the final decision is based on weighted majority voting. Finally, through Local Interpretable Model-agnostic Explanations (LIME) XAI, interpretability and transparency in the decision-making process are provided. The experiments were performed on a publicly available brain MRI dataset consisting of 98 normal cases and 154 abnormal cases, split into 70% (177 cases) for training and 30% (75 cases) for testing. The numerical findings demonstrate that the BTD model outperforms its competitors in accuracy, precision, recall, and F-measure, achieving 98.8% accuracy, 97% precision, 97.5% recall, and a 97.2% F-measure. The results demonstrate the potential of the proposed model to revolutionize brain tumor diagnosis, contribute to better treatment strategies, and improve patient outcomes.
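The weighted majority vote that fuses the PDM classifiers can be sketched in a few lines. The classifier names, weights, and predictions below are hypothetical; in the paper the weights would come from each classifier's validation performance.

```python
# Toy weighted majority vote over per-classifier decisions.
# Labels: 1 = abnormal (tumor), 0 = normal. Weights are assumed accuracies.
from collections import defaultdict

def weighted_vote(predictions, weights):
    score = defaultdict(float)
    for clf, label in predictions.items():
        score[label] += weights[clf]
    return max(score, key=score.get)

weights = {"svm": 0.93, "knn": 0.88, "rf": 0.97}   # hypothetical accuracies
predictions = {"svm": 1, "knn": 0, "rf": 1}
print(weighted_vote(predictions, weights))         # svm + rf outweigh knn
```

Weighting the votes lets a strong classifier overrule a weak dissenter, which a plain majority vote cannot do when the split is 2-1 the other way.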
Keywords: Brain tumor detection; MRI images; explainable AI (XAI); improved gray wolf optimization (IGWO)
A Systematic Review of Multimodal Fusion and Explainable AI Applications in Breast Cancer Diagnosis
14
Authors: Deema Alzamil, Bader Alkhamees, Mohammad Mehedi Hassan. Computer Modeling in Engineering & Sciences, 2025, Issue 12, pp. 2971-3027 (57 pages)
Breast cancer diagnosis relies heavily on information from diverse sources, such as mammogram images, ultrasound scans, patient records, and genetic tests, but most AI tools consider only one of these at a time, which limits their ability to produce accurate and comprehensive decisions. In recent years, multimodal learning has emerged, enabling the integration of heterogeneous data to improve performance and diagnostic accuracy. However, doctors cannot always see how or why these AI tools make their choices, which is a significant bottleneck in their reliability and adoption in clinical settings; hence explainable AI (XAI) techniques that expose the steps a model takes are increasingly being added. This review investigates previous work that has employed multimodal learning and XAI for breast cancer diagnosis, discussing the types of data, fusion techniques, and XAI models employed. It was conducted following the PRISMA guidelines and included studies from 2021 to April 2025; the systematic literature search yielded 61 studies. The review highlights a gradual increase in studies focusing on multimodal fusion and XAI, particularly in 2023-2024. It found that studies using multimodal data fusion achieved accuracies 5%-10% higher on average than studies using single-modality data, and that intermediate fusion strategies and modern fusion techniques, such as cross-attention, achieved the highest accuracy and best performance. The review also showed that SHAP, Grad-CAM, and LIME are the most used techniques for explaining breast cancer diagnostic models. There is a clear research shift toward integrating multimodal learning and XAI techniques in the breast cancer diagnostics field. However, several gaps were identified, including the scarcity of public multimodal datasets, the lack of a unified explainable framework for multimodal fusion systems, and the lack of standardization in evaluating explanations. These limitations call for future research focused on building more shared datasets and integrating multimodal data and explainable AI techniques to improve decision-making and enhance transparency.
Keywords: Breast cancer; classification; explainable artificial intelligence (XAI); deep learning; multimodal data; explainability; data fusion
Alternative Lens to Understand the Relationships Between Neighborhood Environment and Well-being with Capability Approach and Explainable Artificial Intelligence
15
Authors: JIAO Linshen, ZHANG Min, ZHEN Feng, QIN Xiao, CHEN Peipei, ZHANG Shanqi, HU Yuchen. Chinese Geographical Science, 2025, Issue 3, pp. 472-491 (20 pages)
The relationship between the neighborhood environment and well-being is attracting increasing attention from researchers and policymakers as the goal of development shifts from the economy to well-being. However, the existing literature predominantly adopts the utilitarian approach, understanding well-being as people's feelings about their lives and viewing the neighborhood environment as resources that benefit well-being. The capability approach, a novel approach that conceptualizes well-being as the freedoms to do or to be and regards the environment as conversion factors that influence well-being, can offer a new lens by incorporating human development into these topics. This paper proposes an alternative theoretical framework: well-being is conceptualized and measured by capability, and the neighborhood environment affects well-being by providing spatial services, functioning as environmental conversion factors, and serving as social conversion factors. We conducted a case study of Changshu City in eastern China, utilizing multiple data sources and applying explainable artificial intelligence (XAI), namely eXtreme Gradient Boosting (XGBoost) and SHapley Additive exPlanations (SHAP). Our findings highlight the significance of viewing the neighborhood environment as a set of conversion factors, as this provides more explanatory power than viewing it as a provider of spatial services. Compared with conventional research based on the linear-relationship assumption, our results demonstrate that the effects of the neighborhood environment on well-being are non-linear, characterized by threshold effects and interaction effects. These insights are crucial for informing urban planning and public policy. This research enriches our understanding of well-being, the neighborhood environment, and their relationship, and provides empirical evidence for the core concept of conversion factors in the capability approach.
Keywords: Well-being; neighborhood environment; capability approach; non-linear relationship; explainable artificial intelligence (XAI)
AutoSHARC: Feedback Driven Explainable Intrusion Detection with SHAP-Guided Post-Hoc Retraining for QoS Sensitive IoT Networks
16
Authors: Muhammad Saad Farooqui, Aizaz Ahmad Khattak, Bakri Hossain Awaji, Nazik Alturki, Noha Alnazzawi, Muhammad Hanif, Muhammad Shahbaz Khan. Computer Modeling in Engineering & Sciences, 2025, Issue 12, pp. 4395-4439 (45 pages)
Quality of service (QoS) assurance in programmable IoT and 5G networks is increasingly threatened by cyberattacks such as distributed denial of service (DDoS), spoofing, and botnet intrusions. This paper presents AutoSHARC, a feedback-driven, explainable intrusion detection framework that integrates Boruta and LightGBM-SHAP feature selection with a lightweight CNN-Attention-GRU classifier. AutoSHARC employs a two-stage feature selection pipeline to identify the most informative features in high-dimensional IoT traffic, reducing 46 features to 30 highly informative ones, followed by post-hoc SHAP-guided retraining to refine feature importance, forming a feedback loop in which only the most impactful attributes are reused to retrain the model. This iterative refinement reduces computational overhead, accelerates detection, and improves transparency. Evaluated on the CIC IoT 2023 dataset, AutoSHARC achieves 98.98% accuracy, a 98.9% F1-score, and strong robustness, with a Matthews correlation coefficient of 0.98 and a Cohen's kappa of 0.98. The final model contains only 531,272 trainable parameters in a compact 2 MB size, enabling real-time deployment on resource-constrained IoT nodes. By combining explainable AI with iterative feature refinement, AutoSHARC provides scalable and trustworthy intrusion detection while preserving key QoS indicators such as latency, throughput, and reliability.
Keywords: QoS preservation, intelligent programmable networks, intrusion detection, IoT security, feature selection, SHAP explainability, Boruta, LightGBM, explainable deep learning, resource-efficient AI
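The SHAP-guided feedback loop described in this abstract (train, rank features by impact, retrain on only the top attributes) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses synthetic data in place of CIC IoT 2023 traffic, a random forest in place of the CNN–Attention–GRU classifier, and impurity-based feature importances as a stand-in for SHAP values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 46-feature IoT traffic data (hypothetical).
X, y = make_classification(n_samples=1000, n_features=46,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: train an initial model on all 46 features.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Post-hoc refinement: keep the 30 most impactful features and retrain,
# mirroring the paper's feedback loop (importance source swapped here
# from SHAP to impurity-based scores to stay self-contained).
top30 = np.argsort(model.feature_importances_)[::-1][:30]
refined = RandomForestClassifier(n_estimators=100, random_state=0)
refined.fit(X_tr[:, top30], y_tr)
print(refined.score(X_te[:, top30], y_te))
```

In the paper this loop iterates until the retained feature set stabilizes; the sketch shows a single pass.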
Explainable AI for epileptic seizure detection in Internet of Medical Things
17
Authors: Faiq Ahmad Khan, Zainab Umar, Alireza Jolfaei, Muhammad Tariq. 《Digital Communications and Networks》, 2025, No. 3, pp. 587–593 (7 pages)
In the field of precision healthcare, where accurate decision-making is paramount, this study underscores the indispensability of eXplainable Artificial Intelligence (XAI) for epilepsy management within the Internet of Medical Things (IoMT). The methodology entails meticulous preprocessing, applying a band-pass filter and epoch segmentation to optimize the quality of electroencephalograph (EEG) data. The subsequent extraction of statistical features facilitates the differentiation between seizure and non-seizure patterns. The classification phase integrates Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest classifiers. Notably, SVM attains an accuracy of 97.26%, excelling in precision, recall, specificity, and F1 score for identifying seizure and non-seizure instances. Conversely, KNN achieves an accuracy of 72.69%, accompanied by certain trade-offs. The Random Forest classifier stands out with a remarkable accuracy of 99.89%, coupled with exceptional precision (99.73%), recall (100%), specificity (99.80%), and F1 score (99.86%), surpassing both SVM and KNN. XAI techniques, namely Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), enhance the system's transparency. This combination of machine learning and XAI not only improves the reliability and accuracy of the seizure detection system but also enhances trust and interpretability. Healthcare professionals can leverage the identified important features and their dependencies to gain deeper insight into the decision-making process, aiding informed diagnosis and treatment decisions for patients with epilepsy.
Keywords: epileptic seizure, epilepsy, EEG, explainable AI, machine learning
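The preprocessing pipeline this abstract describes (band-pass filtering, epoch segmentation, statistical feature extraction) can be sketched in a few lines. The sampling rate, pass band, epoch length, and feature choices below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256  # assumed EEG sampling rate (Hz)
# 4th-order Butterworth band-pass over a typical EEG band (assumed 0.5–40 Hz).
b, a = butter(4, [0.5, 40], btype="bandpass", fs=fs)

def epoch_features(signal, epoch_s=2):
    """Filter the signal, cut it into fixed-length epochs,
    and extract simple statistical features per epoch."""
    filtered = filtfilt(b, a, signal)
    n = int(epoch_s * fs)
    epochs = filtered[: len(filtered) // n * n].reshape(-1, n)
    return np.column_stack([
        epochs.mean(axis=1),                            # mean amplitude
        epochs.std(axis=1),                             # variability
        np.abs(np.diff(epochs, axis=1)).mean(axis=1),   # mean line length
        epochs.max(axis=1) - epochs.min(axis=1),        # peak-to-peak range
    ])

# 10 s of synthetic signal -> five 2 s epochs, four features each.
rng = np.random.default_rng(0)
feats = epoch_features(rng.standard_normal(fs * 10))
print(feats.shape)  # (5, 4)
```

The resulting feature matrix is what would be fed to the SVM, KNN, and Random Forest classifiers compared in the study.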
An Explainable Deep Learning Framework for Kidney Cancer Classification Using VGG16 and Layer-Wise Relevance Propagation on CT Images
18
Authors: Asma Batool, Fahad Ahmed, Naila Sammar Naz, Ayman Altameem, Ateeq Ur Rehman Khan, Muhammad Adnan, Ahmad Almogren. 《Computer Modeling in Engineering & Sciences》, 2025, No. 12, pp. 4129–4152 (24 pages)
Early and accurate cancer diagnosis through medical imaging is crucial for guiding treatment and enhancing patient survival. However, many state-of-the-art deep learning (DL) methods remain opaque and lack clinical interpretability. This paper presents an explainable artificial intelligence (XAI) framework that combines a fine-tuned Visual Geometry Group 16-layer network (VGG16) convolutional neural network with layer-wise relevance propagation (LRP) to deliver high-performance classification and transparent decision support. The approach is evaluated on the publicly available Kaggle kidney cancer imaging dataset, which comprises labeled cancerous and non-cancerous kidney scans. The proposed model achieved 98.75% overall accuracy, with precision, recall, and F1-score each exceeding 98% on an independent test set. Crucially, LRP-derived heatmaps consistently localize anatomically and pathologically significant regions, such as tumor margins, in agreement with established clinical criteria. The proposed framework enhances clinician trust by delivering pixel-level justifications alongside state-of-the-art predictive performance. It facilitates informed decision-making, thereby addressing a key barrier to the clinical adoption of DL in oncology.
Keywords: explainable artificial intelligence (XAI), deep learning, VGG16, layer-wise relevance propagation (LRP), kidney cancer, medical imaging
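The LRP idea behind this paper's heatmaps (redistribute the network's output score backwards, layer by layer, in proportion to each unit's contribution) can be shown on a toy network. The sketch below applies the LRP-ε rule to a tiny two-layer ReLU network with random weights standing in for VGG16's dense head; it is an assumption-laden illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny two-layer ReLU net (random weights, zero biases) standing in for VGG16.
W1 = rng.standard_normal((4, 6))
W2 = rng.standard_normal((6, 1))

x = rng.standard_normal(4)
a1 = np.maximum(0, x @ W1)   # hidden activations
out = a1 @ W2                # network output (logit)

def lrp_dense(a, W, R, eps=1e-6):
    """LRP-epsilon rule for one dense layer: redistribute relevance R
    from outputs back to inputs in proportion to each input's share
    of the pre-activation, with an eps stabilizer in the denominator."""
    z = a @ W                          # pre-activations
    s = R / (z + eps * np.sign(z))     # stabilized relevance ratio
    return a * (W @ s)                 # input relevances

R1 = lrp_dense(a1, W2, out.copy())     # relevance at the hidden layer
R0 = lrp_dense(x, W1, R1)              # "pixel-level" relevance at the input
print(R0)
```

A useful sanity check is conservation: with zero biases and a small ε, the input relevances sum (approximately) to the network output, so the heatmap accounts for the whole prediction.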
Explainable AI Based Multi-Task Learning Method for Stroke Prognosis
19
Authors: Nan Ding, Xingyu Zeng, Jianping Wu, Liutao Zhao. 《Computers, Materials & Continua》, 2025, No. 9, pp. 5299–5315 (17 pages)
Predicting the health status of stroke patients at different stages of the disease is a critical clinical task. The onset and development of stroke are affected by an array of factors, encompassing genetic predisposition, environmental exposure, unhealthy lifestyle habits, and existing medical conditions. Although existing machine learning-based methods for predicting stroke patients' health status have made significant progress, limitations remain in prediction accuracy, model explainability, and system optimization. This paper proposes a multi-task learning approach based on Explainable Artificial Intelligence (XAI) for predicting the health status of stroke patients. First, we design a comprehensive multi-task learning framework that exploits the correlation among the tasks of predicting various health status indicators, enabling the parallel prediction of multiple indicators. Second, we develop a multi-task Area Under Curve (AUC) optimization algorithm based on adaptive low-rank representation, which removes irrelevant information from the model structure to enhance multi-task AUC optimization. Additionally, the model's explainability is analyzed through a stability analysis of SHAP values. Experimental results demonstrate that our approach outperforms comparison algorithms on the key prognostic metrics of F1 score and efficiency.
Keywords: explainable AI, stroke prognosis, multi-task learning, AUC optimization
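AUC optimization, the objective this entry builds on, rests on the pairwise-ranking view of AUC: the probability that a randomly chosen positive example scores higher than a randomly chosen negative one. A minimal computation of that quantity (not the paper's low-rank multi-task algorithm) looks like this:

```python
import numpy as np

def auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs that are
    correctly ordered; tied pairs count half. This pairwise form is
    the surrogate that AUC-optimization methods maximize."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]          # all pairwise score gaps
    concordant = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return concordant / (len(pos) * len(neg))

s = np.array([0.9, 0.8, 0.3, 0.2])
y = np.array([1, 1, 0, 0])
print(auc(s, y))  # 1.0: every positive outranks every negative
```

Because this quantity is a sum over example pairs rather than single examples, optimizing it directly requires pairwise (or surrogate) losses, which is what motivates specialized algorithms like the one in this paper.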
An explainable feature selection framework for web phishing detection with machine learning
20
Author: Sakib Shahriar Shafin. 《Data Science and Management》, 2025, No. 2, pp. 127–136 (10 pages)
In the evolving landscape of cyber threats, phishing attacks pose significant challenges, particularly through deceptive webpages designed to extract sensitive information under the guise of legitimacy. Conventional and machine learning (ML)-based detection systems struggle to detect phishing websites owing to their constantly changing tactics. Furthermore, newer phishing websites exhibit subtle and expertly concealed indicators that are not readily detectable. Hence, effective detection depends on identifying the most critical features. Traditional feature selection (FS) methods often fail to enhance ML model performance and can instead degrade it. To combat these issues, we propose an innovative method using explainable AI (XAI) to enhance FS in ML models and improve the identification of phishing websites. Specifically, we employ SHapley Additive exPlanations (SHAP) for a global perspective and aggregated local interpretable model-agnostic explanations (LIME) to determine specific localized patterns. The proposed SHAP- and LIME-aggregated FS (SLA-FS) framework pinpoints the most informative features, enabling more precise, swift, and adaptable phishing detection. Applying this approach to an up-to-date web phishing dataset, we evaluate the performance of three ML models before and after FS. Our findings reveal that random forest (RF), with an accuracy of 97.41%, and XGBoost (XGB), at 97.21%, benefit significantly from the SLA-FS framework, while k-nearest neighbors lags behind. The framework increases the accuracy of RF and XGB by 0.65% and 0.41%, respectively, outperforming traditional filter and wrapper methods as well as prior methods evaluated on this dataset, showcasing its potential.
Keywords: webpage phishing, explainable AI, feature selection, machine learning
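The core move in SLA-FS (fuse a global importance ranking with aggregated per-sample local importances before selecting features) can be sketched without the SHAP and LIME libraries. Below, impurity importance stands in for the global SHAP view and mean per-sample probability drops under feature perturbation stand in for aggregated LIME scores; the fusion step (top-k union of both rankings) is a plausible reading of the framework, not its exact rule.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for phishing-website features (hypothetical).
X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Global view (SHAP stand-in): model-level importance per feature.
global_imp = model.feature_importances_

# Aggregated local view (LIME stand-in): mean drop in the predicted
# class probability when one feature is replaced by its column mean.
proba = model.predict_proba(X)[np.arange(len(y)), y]
local_imp = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = X[:, j].mean()
    local_imp[j] = np.abs(
        proba - model.predict_proba(Xp)[np.arange(len(y)), y]).mean()

# Fusion: keep the union of the top-k features under each view.
k = 4
keep = np.union1d(np.argsort(global_imp)[::-1][:k],
                  np.argsort(local_imp)[::-1][:k])
print(keep)
```

The selected subset would then be used to retrain the downstream RF, XGB, and KNN models, mirroring the before/after-FS comparison reported in the paper.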