Journal Articles
103 articles found
ExplainableDetector: Exploring transformer-based language modeling approach for SMS spam detection with explainability analysis
1
Authors: Mohammad Amaz Uddin, Muhammad Nazrul Islam, Leandros Maglaras, Helge Janicke, Iqbal H. Sarker | Digital Communications and Networks, 2025, Issue 5, pp. 1504-1518 (15 pages)
Short Message Service (SMS) is a widely used and cost-effective communication medium that has unfortunately become a frequent target for unsolicited messages, commonly known as SMS spam. With the rapid adoption of smartphones and increased Internet connectivity, SMS spam has emerged as a prevalent threat. Spammers have recognized the critical role SMS plays in modern communication, making it a prime target for abuse. As cybersecurity threats continue to evolve, the volume of SMS spam has increased substantially in recent years. Moreover, the unstructured format of SMS data creates significant challenges for SMS spam detection, making it more difficult to combat spam attacks successfully. In this paper, we present an optimized and fine-tuned transformer-based language model to address the problem of SMS spam detection. We use a benchmark SMS spam dataset to analyze this spam detection model. Additionally, we apply pre-processing techniques to obtain clean and noise-free data, and address the class imbalance problem by leveraging text augmentation techniques. The overall experiment showed that our optimized, fine-tuned BERT (Bidirectional Encoder Representations from Transformers) variant, RoBERTa, obtained a high accuracy of 99.84%. To further enhance model transparency, we incorporate Explainable Artificial Intelligence (XAI) techniques that compute positive and negative coefficient scores, offering insight into the model's decision-making process. Additionally, we evaluate the performance of traditional machine learning models as a baseline for comparison. This comprehensive analysis demonstrates the significant impact language models can have on addressing complex text-based challenges within the cybersecurity landscape.
Keywords: Cybersecurity, Machine learning, Large language model, Spam detection, Text analytics, Explainable AI, Fine-tuning, Transformer
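The class-imbalance step this abstract mentions (balancing the minority spam class via text augmentation) can be illustrated with a minimal sketch. The paper does not specify its augmentation technique, so the word-dropout variant below, and all the message strings, are assumptions for illustration only:

```python
import random

def augment(text, drop_prob=0.15, rng=None):
    """Create a noisy variant of a message by randomly dropping words."""
    rng = rng or random
    words = text.split()
    kept = [w for w in words if rng.random() > drop_prob]
    return " ".join(kept) if kept else text  # never return an empty message

def balance_by_augmentation(class_a, class_b, rng):
    """Oversample the minority class with augmented variants until balanced."""
    minority, majority = sorted([class_a, class_b], key=len)
    augmented = list(minority)
    while len(augmented) < len(majority):
        augmented.append(augment(rng.choice(minority), rng=rng))
    return majority, augmented

rng = random.Random(0)
ham = ["see you at lunch", "meeting moved to 3 pm", "call me later", "thanks again"]
spam = ["WIN a FREE prize now, click here"]
majority, minority = balance_by_augmentation(ham, spam, rng)
print(len(majority), len(minority))  # 4 4
```

A real pipeline would apply this only to the training split, then feed the balanced text to the tokenizer and fine-tuning loop.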
Graph-Based Intrusion Detection with Explainable Edge Classification Learning
2
Authors: Jaeho Shin, Jaekwang Kim | Computers, Materials & Continua, 2026, Issue 1, pp. 610-635 (26 pages)
Network attacks have become a critical issue in the internet security domain. Detection methodologies based on artificial intelligence technology have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to explain the detection results logically using artificial intelligence. We propose a method for classifying network attacks using graph models to explain the detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks using edge classification. To explain the prediction results, we observe numerical changes by randomly masking neighbors and calculating their importance, allowing us to extract significant subgraphs. Our experiments on six public datasets demonstrate superior performance, with an average F1-score of 0.960 and accuracy of 0.964, outperforming traditional machine learning and other graph models. The visual representation of the extracted subgraphs highlights the neighboring nodes that have the greatest impact on the results, thus explaining the detection. In conclusion, this study demonstrates that graph-based models are suitable for network attack detection in complex environments, and that the importance of graph neighbors can be calculated to efficiently analyze the results. This approach can contribute to real-world network security analyses and provide a new direction for the field.
Keywords: Intrusion detection, Graph neural network, Explainable AI, Network attacks, GraphSAGE
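The explanation mechanism the abstract describes, masking a neighbor and observing the change in the prediction score, can be written generically. The scorer and node names below are toy assumptions standing in for the paper's trained graph model:

```python
def neighbor_importance(score_fn, neighbors):
    """Importance of each neighbor = drop in the model's score when it is masked."""
    base = score_fn(neighbors)
    importance = {}
    for n in neighbors:
        masked = [m for m in neighbors if m != n]  # mask out one neighbor
        importance[n] = base - score_fn(masked)
    return importance

# Toy edge scorer standing in for a trained GNN: the "attack score" of an
# edge depends mostly on whether suspicious node "C" is in the neighborhood.
def toy_score(neighbors):
    weights = {"A": 0.1, "B": 0.2, "C": 0.6}
    return sum(weights.get(n, 0.0) for n in neighbors)

imp = neighbor_importance(toy_score, ["A", "B", "C"])
print(max(imp, key=imp.get))  # C, the neighbor that drives the prediction
```

The neighbors with the largest score drops form the "significant subgraph" that explains a detection.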
Cascading Class Activation Mapping: A Counterfactual Reasoning-Based Explainable Method for Comprehensive Feature Discovery
3
Authors: Seoyeon Choi, Hayoung Kim, Guebin Choi | Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 1043-1069 (27 pages)
Most Convolutional Neural Network (CNN) interpretation techniques visualize only the dominant cues that the model relies on, but there is no guarantee that these represent all the evidence the model uses for classification. This limitation becomes critical when hidden secondary cues, potentially more meaningful than the visualized ones, remain undiscovered. This study introduces CasCAM (Cascaded Class Activation Mapping) to address this fundamental limitation through counterfactual reasoning. By asking "if this dominant cue were absent, what other evidence would the model use?", CasCAM progressively masks the most salient features and systematically uncovers the hierarchy of classification evidence hidden beneath them. Experimental results demonstrate that CasCAM effectively discovers the full spectrum of reasoning evidence and can be universally applied with nine existing interpretation methods.
Keywords: Explainable AI, Class activation mapping, Counterfactual reasoning, Shortcut learning, Feature discovery
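The counterfactual loop the abstract describes (mask the dominant cue, recompute saliency, repeat) reduces to a simple iteration. The 1-D "image" and magnitude-based saliency below are stand-ins for a real CNN and its class activation map, not CasCAM itself:

```python
def cascaded_salient_regions(saliency_fn, image, n_rounds=3, mask_value=0.0):
    """Iteratively mask the most salient position and recompute saliency,
    revealing a hierarchy of evidence instead of only the dominant cue."""
    img = list(image)
    found = []
    for _ in range(n_rounds):
        saliency = saliency_fn(img)
        idx = max(range(len(saliency)), key=saliency.__getitem__)
        found.append(idx)
        img[idx] = mask_value  # counterfactual: remove the dominant cue
    return found

# Toy 1-D "image" whose saliency is just its pixel magnitude.
order = cascaded_salient_regions(lambda x: [abs(v) for v in x], [0.2, 0.9, 0.5, 0.7])
print(order)  # [1, 3, 2]
```

Each round answers "what would the model look at if the previous cue were absent?", which is exactly the cascading idea.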
Subtle Micro-Tremor Fusion: A Cross-Modal AI Framework for Early Detection of Parkinson’s Disease from Voice and Handwriting Dynamics
4
Authors: H. Ahmed, Naglaa E. Ghannam, H. Mancy, Esraa A. Mahareek | Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 1070-1099 (30 pages)
Parkinson’s disease remains a major clinical challenge in terms of early detection, especially during its prodromal stage when symptoms are not evident or distinct. To address this problem, we propose a new deep learning-based approach for detecting Parkinson’s disease before any overt symptoms develop. We used five publicly accessible datasets, including UCI Parkinson’s Voice, Spiral Drawings, PaHaW, NewHandPD, and PPMI, and implemented a dual-stream CNN-BiLSTM architecture with Fisher-weighted feature merging and SHAP-based explanation. The findings reveal superior performance: the model achieved 98.2% accuracy, an F1-score of 0.981, and an AUC of 0.991 on the UCI Voice dataset. Performance on the remaining datasets was comparable, with up to a 2-7 percentage-point improvement in accuracy over existing strong models such as CNN-RNN-MLP, ILN-GNet, and CASENet. Across the evidence, the findings support the diagnostic promise of micro-tremor assessment and demonstrate that combining temporal and spatial features with scatter-based segmentation in a multi-modal approach can provide an effective and scalable platform for an early, interpretable PD screening system.
Keywords: Early Parkinson diagnosis, Explainable AI (XAI), Feature-level fusion, Handwriting analysis, Micro-tremor detection, Multimodal fusion, Parkinson’s disease, Prodromal detection, Voice signal processing
Explainable Ensemble Learning Framework for Early Detection of Autism Spectrum Disorder: Enhancing Trust, Interpretability and Reliability in AI-Driven Healthcare
5
Authors: Menwa Alshammeri, Noshina Tariq, NZ Jhanji, Mamoona Humayun, Muhammad Attique Khan | Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 1233-1265 (33 pages)
Artificial Intelligence (AI) is changing healthcare by assisting with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types of datasets: (i) a standard behavioral dataset and (ii) a more complex multimodal dataset with images, audio, and physiological information. The datasets were carefully preprocessed for missing values, redundant features, and class imbalance to ensure fair learning. The results outperformed the state of the art, with a Regularized Neural Network achieving 97.6% accuracy on behavioral data and 98.2% on the multimodal data. Other models also performed well, with accuracies consistently above 96%. We also used SHAP and LIME on the behavioral dataset for model explainability.
Keywords: Autism spectrum disorder (ASD), Artificial intelligence in healthcare, Explainable AI (XAI), Ensemble learning, Machine learning, Early diagnosis, Model interpretability, SHAP, LIME, Predictive analytics, Ethical AI, Healthcare trustworthiness
Explainable Hybrid AI Model for DDoS Detection in SDN-Enabled Internet of Vehicle
6
Authors: Oumaima Saidani, Nazia Azim, Ateeq Ur Rehman, Akbayan Bekarystankyzy, Hala Abdel Hameed, Mostafa Mohamed R. Abonazel, Ehab Ebrahim Mohamed Ebrahim, Sarah Abu Ghazalah | Computers, Materials & Continua, 2026, Issue 5, pp. 499-526 (28 pages)
The convergence of Software Defined Networking (SDN) and the Internet of Vehicles (IoV) enables a flexible, programmable, and globally visible network control architecture across Road Side Units (RSUs), cloud servers, and automobiles. While this integration enhances scalability and safety, it also invites sophisticated cyberthreats, particularly Distributed Denial of Service (DDoS) attacks. Traditional rule-based anomaly detection methods often struggle to detect modern low-and-slow DDoS patterns, leading to higher false positives. To this end, this study proposes an explainable hybrid framework to detect DDoS attacks in SDN-enabled IoV (SDN-IoV). The hybrid framework utilizes a Residual Network (ResNet) to capture spatial correlations and a Bidirectional Long Short-Term Memory (BiLSTM) network to capture both forward and backward temporal dependencies in high-dimensional input patterns. To ensure transparency and trustworthiness, the model integrates the Explainable AI (XAI) technique SHapley Additive exPlanations (SHAP). SHAP highlights the contribution of each feature during the decision-making process, helping security analysts understand the rationale behind the attack classification decision. The SDN-IoV environment is created in Mininet-WiFi and SUMO, and the hybrid model is trained on the CICDDoS2019 security dataset. The simulation results reveal the efficacy of the proposed model in terms of standard performance metrics compared to similar baseline methods.
Keywords: Explainable AI, Software defined networking, Internet of vehicles, DDoS attack, ResNet, BiLSTM
Malware Detection and AI Integration:A Systematic Review of Current Trends and Future Directions
7
Authors: M. Mohsin Raza, Muhammad Umair, Imran Arshad Choudhry, Muhammad Qasim, Muhammad Tahir Naseem, Mamoona Naveed Asghar, Daniel Gavilanes, Manuel Masias Vergara, Imran Ashraf | Computer Modeling in Engineering & Sciences, 2026, Issue 3, pp. 80-119 (40 pages)
Over the past decade, the cybersecurity landscape has been increasingly shaped by the growing sophistication and frequency of malware attacks. Traditional detection techniques, while still in use, often fall short when confronted with modern threats that use advanced evasion strategies. This systematic review critically examines recent developments in malware detection, with a particular emphasis on the role of artificial intelligence (AI) and machine learning (ML) in enhancing detection capabilities. Drawing on literature published between 2019 and 2025, this study reviews 105 peer-reviewed contributions from prominent digital libraries including IEEE Xplore, SpringerLink, ScienceDirect, and the ACM Digital Library. In doing so, it explores the evolution of malware, evaluates detection methods, assesses the quality and limitations of widely used datasets, and identifies key challenges facing the field. Unlike existing surveys, this work offers a structured comparison of AI-driven frameworks and provides a detailed account of emerging techniques such as hybrid detection frameworks and image-based analysis. The findings indicate that AI-based models trained on diverse, high-quality datasets consistently outperform conventional methods, particularly when supported by feature engineering, explainable AI, and a multi-faceted strategy. The review concludes by outlining future research directions, including the need for standardized datasets, enhanced adversarial robustness, and the integration of privacy-preserving mechanisms in malware detection systems.
Keywords: Cybersecurity, Machine learning, Malware dataset, Malware detection, Feature selection, Deep learning, Explainable AI (XAI)
Optimizing UCS Prediction Models through XAI-Based Feature Selection in Soil Stabilization
8
Authors: Ahmed Mohammed Awad Mohammed, Omayma Husain, Mosab Hamdan, Abdalmomen Mohammed Abdullah Ansari, Atef Badr, Abubakar Elsafi, Abubakr Siddig | Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 524-549 (26 pages)
Unconfined Compressive Strength (UCS) is a key parameter for assessing the stability and performance of stabilized soils, yet traditional laboratory testing is both time- and resource-intensive. In this study, an interpretable machine learning approach to UCS prediction is presented, pairing five models (Random Forest (RF), Gradient Boosting (GB), Extreme Gradient Boosting (XGB), CatBoost, and K-Nearest Neighbors (KNN)) with SHapley Additive exPlanations (SHAP) to enhance interpretability and guide feature removal. A complete dataset of 12 geotechnical and chemical parameters (Atterberg limits, compaction properties, stabilizer chemistry, dosage, and curing time) was used to train and test the models. R², RMSE, MSE, and MAE were used to assess performance. Initial results with all 12 features indicated that the boosting-based models (GB, XGB, CatBoost) exhibited the highest predictive accuracy (R² = 0.93) with satisfactory generalization on test data, followed by RF and KNN. SHAP analysis consistently identified CaO content, curing time, stabilizer dosage, and compaction parameters as the most important features, aligning with established soil stabilization mechanisms. Models were then re-trained on the top 8 and top 5 SHAP-ranked features. Interestingly, GB, XGB, and CatBoost maintained comparable accuracy with the reduced input sets, while RF was moderately sensitive and KNN improved somewhat owing to the reduced dimensionality. The findings confirm that feature reduction through SHAP enables cost-effective UCS prediction by reducing laboratory test requirements without significant accuracy loss. The suggested hybrid approach offers an explainable, interpretable, and cost-effective tool for geotechnical engineering practice.
Keywords: Explainable AI, Feature selection, Machine learning, SHAP analysis, Soil stabilization, Unconfined compressive strength
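The feature-reduction step the abstract describes, ranking features by mean absolute SHAP value and re-training on the top k, can be sketched without any ML library. The feature names and attribution values below are illustrative assumptions, not the paper's data:

```python
def mean_abs_attribution(attributions):
    """Global importance = mean |attribution| per feature across samples."""
    n_features = len(attributions[0])
    n_samples = len(attributions)
    return [sum(abs(row[j]) for row in attributions) / n_samples
            for j in range(n_features)]

def top_k_features(attributions, names, k):
    """Return the k feature names with the largest mean |attribution|."""
    scores = mean_abs_attribution(attributions)
    ranked = sorted(range(len(names)), key=lambda j: -scores[j])
    return [names[j] for j in ranked[:k]]

names = ["CaO", "curing_time", "dosage", "liquid_limit", "plasticity_index"]
attributions = [  # one row of per-feature SHAP-style values per sample
    [0.9, 0.5, 0.3, 0.05, 0.02],
    [-0.8, 0.6, -0.2, 0.04, 0.01],
    [0.7, -0.4, 0.25, 0.03, 0.02],
]
print(top_k_features(attributions, names, 3))  # ['CaO', 'curing_time', 'dosage']
```

In practice the attribution matrix would come from a SHAP explainer over the trained model, and the selected columns would feed the re-training run.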
An explainable deep learning approach to enhance the prediction of shield tunnel deviation
9
Authors: Jiajie Zhen, Fengwen Lai, Ming Huang, Junjie Zheng, Jim S. Shiau, Ping Wang, Jinhuo Zheng | Journal of Rock Mechanics and Geotechnical Engineering, 2026, Issue 1, pp. 566-579 (14 pages)
Although machine learning models have achieved sufficiently high accuracy in predicting shield position deviations, their "black box" nature makes the prediction mechanisms and decision-making processes opaque, weakening explainability and practicability. This study introduces a novel explainable deep learning framework comprising an Informer model with enhanced attention mechanisms (EAMInfor) and Deep Learning Important FeaTures (DeepLIFT), aimed at improving the prediction accuracy of shield position deviations and providing interpretability for the predictive results. The EAMInfor model integrates channel attention, spatial attention, and simple attention modules to improve the Informer model's performance. The framework is tested on four datasets covering different geological conditions, generated from Xiamen metro line 3, China. Results show that the EAMInfor model outperforms the traditional Informer and comparison models. The analysis with the DeepLIFT method indicates that the thrust of the push cylinder and the earth chamber pressure are the most significant features, while the stroke length of the push cylinder is of lower importance. Furthermore, the variation trends in the significance of data points within input sequences differ substantially between single and composite strata. This framework not only improves predictive accuracy but also strengthens the credibility and reliability of the results.
Keywords: Shield tunnel position deviation, Machine learning, Explainable AI, Deep learning important features
Bridging AI and Cyber Defense:A Stacked Ensemble Deep Learning Model with Explainable Insights
10
Authors: Faisal Albalwy, Muhannad Almohaimeed | Computers, Materials & Continua, 2026, Issue 5, pp. 559-578 (20 pages)
Intrusion detection in Internet of Things (IoT) environments presents challenges due to heterogeneous devices, diverse attack vectors, and highly imbalanced datasets. Existing research on the ToN-IoT dataset has largely emphasized binary classification and single-model pipelines, which often show strong performance but limited generalizability, probabilistic reliability, and operational interpretability. This study proposes a stacked ensemble deep learning framework that integrates random forest, extreme gradient boosting, and a deep neural network as base learners, with CatBoost as the meta-learner. On the ToN-IoT Linux process dataset, the model achieved near-perfect discrimination (macro area under the curve = 0.998), robust calibration, and superior F1-scores compared with standalone classifiers. Interpretability was achieved through SHapley Additive exPlanations-based feature attribution, which highlights actionable drivers of malicious behavior, such as command-line patterns, process scheduling anomalies, and CPU usage spikes, and aligns these indicators with MITRE ATT&CK tactics and techniques. Complementary analyses, including cumulative lift and sensitivity-specificity trade-offs, revealed the framework's suitability for deployment in security operations centers, where calibrated risk scores, transparent explanations, and resource-aware triage are essential. These contributions bridge methodological rigor in artificial intelligence and machine learning with operational priorities in cybersecurity, delivering a scalable and explainable intrusion detection system suitable for real-world deployment in IoT environments.
Keywords: Cybersecurity, IoT intrusion detection, Stacked ensemble learning, Deep learning, Explainable AI (XAI), Probability calibration, SHAP interpretability, ToN-IoT dataset, MITRE ATT&CK
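The stacking scheme in the abstract (base-learner outputs become the meta-learner's features) can be sketched with toy stand-ins. The rule-based "learners", the feature names, and the fixed-weight meta rule below are illustrative assumptions, not the trained RF/XGBoost/DNN/CatBoost pipeline:

```python
def make_stacked(base_models, meta_model):
    """Stacking: base-model scores form the meta-learner's feature vector."""
    def predict(x):
        meta_features = [m(x) for m in base_models]
        return meta_model(meta_features)
    return predict

# Toy base learners emitting a "malicious" score in [0, 1].
rf_like  = lambda x: 1.0 if x["cpu"] > 0.8 else 0.1
xgb_like = lambda x: 1.0 if "nc -e" in x["cmd"] else 0.0
dnn_like = lambda x: min(1.0, x["cpu"] + 0.2)

# Toy meta-learner: a fixed-weight vote (a real one would be trained, e.g. CatBoost).
meta = lambda feats: int(sum(feats) / len(feats) > 0.5)

detector = make_stacked([rf_like, xgb_like, dnn_like], meta)
print(detector({"cpu": 0.95, "cmd": "nc -e /bin/sh"}))  # 1 (flagged)
print(detector({"cpu": 0.10, "cmd": "ls -la"}))         # 0 (benign)
```

The design point is that the meta-learner sees only the base scores, so it learns how to weigh disagreements among heterogeneous models.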
A Study on the Explainability of Thyroid Cancer Prediction: SHAP Values and Association-Rule Based Feature Integration Framework
11
Authors: Sujithra Sankar, S. Sathyalakshmi | Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 3111-3138 (28 pages)
In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule based, feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as a powerful tool for explaining thyroid cancer prediction models. In the proposed method, the association-rule based feature integration framework identifies frequently occurring attribute combinations in the dataset. The original dataset is used to train machine learning models, which then generate SHAP values. In the next phase, the dataset is integrated with the dominant feature sets identified through association-rule based analysis. This new integrated dataset is used to re-train the machine learning models, and the new SHAP values generated from these models help validate the contributions of the feature sets in predicting malignancy. Conventional machine learning models lack interpretability, which can hinder their integration into clinical decision-making systems. In this study, SHAP values are introduced along with association-rule based feature integration as a comprehensive framework for understanding the contributions of feature sets to the model's predictions. The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer, together with a validation framework for explainability. The proposed model shows an accuracy of 93.48%. Performance metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUROC) are also higher than those of the baseline models. The results of the proposed model help identify the dominant feature sets that impact thyroid cancer classification and prediction. The features {calcification} and {shape} consistently emerged as the top-ranked features associated with thyroid malignancy, in both association-rule based interestingness-metric values and SHAP methods. The paper highlights the potential of rule-based integrated models with SHAP in bridging the gap between machine learning predictions and the interpretability required for real-world medical applications.
Keywords: Explainable AI, Machine learning, Clinical decision support systems, Thyroid cancer, Association-rule based framework, SHAP values, Classification and prediction
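The association-rule step rests on finding attribute combinations that co-occur above a support threshold. A minimal support-counting sketch for pairs, with hypothetical thyroid attribute values, might look like this (the record contents are assumptions for illustration):

```python
from itertools import combinations

def frequent_pairs(records, min_support):
    """Find attribute-value pairs co-occurring in at least min_support of records."""
    counts = {}
    for record in records:
        for combo in combinations(sorted(record), 2):
            counts[combo] = counts.get(combo, 0) + 1
    threshold = min_support * len(records)
    return {pair: n for pair, n in counts.items() if n >= threshold}

records = [  # each record is the set of attribute values observed for one case
    {"calcification=yes", "shape=irregular", "echo=low"},
    {"calcification=yes", "shape=irregular"},
    {"calcification=yes", "shape=regular", "echo=high"},
    {"calcification=no", "shape=irregular"},
]
print(frequent_pairs(records, min_support=0.5))
```

Combinations that pass the threshold become the "dominant feature sets" that are merged back into the training data before the SHAP re-analysis.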
Unveiling dominant factors for gully distribution in wildfire-affected areas using explainable AI: A case study of Xiangjiao catchment, Southwest China (Cited by 2)
12
Authors: ZHOU Ruichen, HU Xiewen, XI Chuanjie, HE Kun, DENG Lin, LUO Gang | Journal of Mountain Science, 2025, Issue 8, pp. 2765-2792 (28 pages)
Wildfires significantly disrupt the physical and hydrologic conditions of the environment, leading to vegetation loss and altered surface geo-material properties. These complex dynamics promote post-fire gully erosion, yet the key conditioning factors (e.g., topography, hydrology) remain insufficiently understood. This study proposes a novel artificial intelligence (AI) framework that integrates four machine learning (ML) models with the Shapley Additive Explanations (SHAP) method, offering a hierarchical, global-to-local perspective on the dominant factors controlling gully distribution in wildfire-affected areas. In a case study of the Xiangjiao catchment, burned on March 28, 2020, in Muli County, Sichuan Province, Southwest China, we derived 21 geo-environmental factors to assess the susceptibility of post-fire gully erosion using logistic regression (LR), support vector machine (SVM), random forest (RF), and convolutional neural network (CNN) models. SHAP-based model interpretation revealed eight key conditioning factors: topographic position index (TPI), topographic wetness index (TWI), distance to stream, mean annual precipitation, differenced normalized burn ratio (dNBR), land use/cover, soil type, and distance to road. Comparative model evaluation demonstrated that reduced-variable models incorporating these dominant factors achieved accuracy comparable to that of the initial-variable models, with AUC values exceeding 0.868 across all ML algorithms. These findings provide critical insights into gully erosion behavior in wildfire-affected areas, supporting decision-making in environmental management and hazard mitigation.
Keywords: Gully erosion susceptibility, Explainable AI, Wildfire, Geo-environmental factor, Machine learning
Explainable artificial intelligence for rock discontinuity detection from point cloud with ensemble methods (Cited by 1)
13
Author: Mehmet Akif Günen | Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 12, pp. 7590-7611 (22 pages)
This study presents a framework for the semi-automatic detection of rock discontinuities using a three-dimensional (3D) point cloud (PC). The process begins by selecting an appropriate neighborhood size, a critical step for feature extraction from the PC. The effects of different neighborhood sizes (k = 5, 10, 20, 50, and 100) are evaluated to assess their impact on classification performance. Next, 17 geometric and spatial features are extracted from the PC, and ensemble methods (AdaBoost.M2, random forest, and decision tree) are compared with artificial neural networks to classify the main discontinuity sets. The McNemar test indicates that the differences between classifiers are statistically significant. The random forest classifier consistently achieves the highest performance, with an accuracy exceeding 95% at a neighborhood size of k = 100, while recall, F-score, and Cohen's Kappa are also high. SHapley Additive exPlanations (SHAP), an Explainable AI technique, is used to evaluate feature importance and improve the explainability of black-box machine learning models in the context of rock discontinuity classification. The analysis reveals that features such as normal vectors, verticality, and Z-values have the greatest influence on identifying the main discontinuity sets, while linearity, planarity, and eigenvalues contribute less, making the model more transparent and easier to understand. After classification, individual discontinuities are detected within the main discontinuity sets using a revised DBSCAN. Finally, the orientation parameters of the plane fitted to each discontinuity are derived from the plane parameters obtained using Random Sample Consensus (RANSAC). Two real-world datasets (obtained from SfM and LiDAR) and one synthetic dataset were used to validate the proposed method, which successfully identified rock discontinuities and their orientation parameters (dip angle/direction).
Keywords: Point cloud (PC), Rock discontinuity, Explainable AI techniques, Machine learning, Dip/dip direction
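The final step of the pipeline, fitting a plane to each discontinuity with RANSAC to recover its orientation, can be sketched in pure Python. This is a generic RANSAC plane fit under assumed tolerances, not the paper's implementation:

```python
import random

def plane_from_points(p, q, r):
    """Plane through three points: returns (unit normal n, offset d) with n·x = d."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:  # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, iters=300, tol=0.05, rng=None):
    """Repeatedly fit a plane to 3 random points; keep the one with most inliers."""
    rng = rng or random
    best, best_inliers = None, -1
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = sum(abs(sum(n[i] * pt[i] for i in range(3)) - d) < tol
                      for pt in points)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best, best_inliers

# 16 points on the plane z = 0 plus two outliers.
pts = [(float(i), float(j), 0.0) for i in range(4) for j in range(4)]
pts += [(0.5, 0.5, 3.0), (1.5, 2.5, -2.0)]
(n, d), count = ransac_plane(pts, rng=random.Random(7))
print(count, round(abs(n[2]), 2))  # 16 inliers; normal close to (0, 0, ±1)
```

From the recovered unit normal, dip angle and dip direction follow by standard spherical conversion of the normal vector.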
Research Trends and Networks in Self-Explaining Autonomous Systems: A Bibliometric Study
14
Authors: Oscar Peña-Cáceres, Elvis Garay-Silupu, Darwin Aguilar-Chuquizuta, Henry Silva-Marchan | Computers, Materials & Continua, 2025, Issue 8, pp. 2151-2188 (38 pages)
Self-Explaining Autonomous Systems (SEAS) have emerged as a strategic frontier within Artificial Intelligence (AI), responding to growing demands for transparency and interpretability in autonomous decision-making. This study presents a comprehensive bibliometric analysis of SEAS research published between 2020 and February 2025, drawing upon 1380 documents indexed in Scopus. The analysis applies co-citation mapping, keyword co-occurrence, and author collaboration networks using VOSviewer, MASHA, and Python to examine scientific production, intellectual structure, and global collaboration patterns. The results indicate a sustained annual growth rate of 41.38%, with an h-index of 57 and an average of 21.97 citations per document. A normalized citation rate was computed to address temporal bias, enabling balanced evaluation across publication cohorts. Thematic analysis reveals four consolidated research fronts: interpretability in machine learning, explainability in deep neural networks, transparency in generative models, and optimization strategies in autonomous control. Author co-citation analysis identifies four distinct research communities, and keyword evolution shows growing interdisciplinary links with medicine, cybersecurity, and industrial automation. Geographically, the United States leads in scientific output and citation impact, while countries such as India and China show high productivity with varied influence. However, international collaboration remains limited at 7.39%, reflecting a fragmented research landscape. As discussed in this study, SEAS research is expanding rapidly yet remains epistemologically dispersed, with uneven integration of ethical and human-centered perspectives. This work offers a structured and data-driven perspective on SEAS development, highlights key contributors and thematic trends, and outlines critical directions for advancing responsible and transparent autonomous systems.
Keywords: Self-explaining autonomous systems, Explainable AI, Machine learning, Deep learning, Artificial intelligence
An explainable feature selection framework for web phishing detection with machine learning
15
Author: Sakib Shahriar Shafin | Data Science and Management, 2025, Issue 2, pp. 127-136 (10 pages)
In the evolving landscape of cyber threats, phishing attacks pose significant challenges, particularly through deceptive webpages designed to extract sensitive information under the guise of legitimacy. Conventional and machine learning (ML)-based detection systems struggle to detect phishing websites owing to their constantly changing tactics. Furthermore, newer phishing websites exhibit subtle and expertly concealed indicators that are not readily detectable. Hence, effective detection depends on identifying the most critical features. Traditional feature selection (FS) methods often fail to enhance ML model performance and can instead degrade it. To address these issues, we propose an innovative method using explainable AI (XAI) to enhance FS in ML models and improve the identification of phishing websites. Specifically, we employ SHapley Additive exPlanations (SHAP) for a global perspective and aggregated Local Interpretable Model-agnostic Explanations (LIME) to determine specific localized patterns. The proposed SHAP- and LIME-aggregated FS (SLA-FS) framework pinpoints the most informative features, enabling more precise, swift, and adaptable phishing detection. Applying this approach to an up-to-date web phishing dataset, we evaluate the performance of three ML models before and after FS to assess their effectiveness. Our findings reveal that random forest (RF), with an accuracy of 97.41%, and XGBoost (XGB), at 97.21%, benefit significantly from the SLA-FS framework, while k-nearest neighbors lags. Our framework increases the accuracy of RF and XGB by 0.65% and 0.41%, respectively, outperforming traditional filter and wrapper methods as well as all prior methods evaluated on this dataset, showcasing its potential.
Keywords: Webpage phishing; Explainable AI; Feature selection; Machine learning
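The core of the SLA-FS idea described above, combining a global SHAP view with aggregated local LIME weights to rank features, can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the paper's implementation: the attribution matrices are assumed to have been precomputed (e.g., by the `shap` and `lime` libraries), and the equal 0.5/0.5 weighting of the two views is an assumption.

```python
import numpy as np

def sla_fs_rank(shap_values, lime_weights, top_k=5):
    """Rank features by combining global mean |SHAP| importance with
    aggregated (mean |weight|) local LIME importance."""
    shap_imp = np.abs(shap_values).mean(axis=0)   # global perspective
    lime_imp = np.abs(lime_weights).mean(axis=0)  # aggregated local patterns
    # Normalize so the two views are on a comparable scale.
    shap_imp = shap_imp / shap_imp.sum()
    lime_imp = lime_imp / lime_imp.sum()
    combined = 0.5 * shap_imp + 0.5 * lime_imp    # assumed equal weighting
    return np.argsort(combined)[::-1][:top_k]

# Synthetic attributions: 100 samples, 8 features, feature 3 made dominant.
rng = np.random.default_rng(0)
shap_vals = rng.normal(size=(100, 8))
lime_vals = rng.normal(size=(100, 8))
shap_vals[:, 3] *= 5.0
lime_vals[:, 3] *= 5.0
top = sla_fs_rank(shap_vals, lime_vals, top_k=3)
print(top[0])  # feature 3 ranks first
```

The selected indices would then feed a refit of the downstream classifier (RF, XGB, or KNN in the paper's evaluation).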
Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation
16
Authors: Khulud Salem Alshudukhi, Sijjad Ali, Mamoona Humayun, Omar Alruwaili. Computer Modeling in Engineering & Sciences, 2025, Issue 12, pp. 3029-3085 (57 pages)
Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity, as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a path for developing robust, next-generation XAI frameworks. Solution: This review provides a systematic examination of XAI techniques (e.g., SHAP, LIME, Grad-CAM) and their applications in intrusion detection, malware analysis, and fraud prevention. It critically evaluates the security risks posed by XAI, including model inversion and explanation-guided evasion attacks, and assesses corresponding defense strategies such as adversarially robust training, differential privacy, and secure-XAI deployment patterns. Contribution: The primary contributions of this work are: (1) a comparative analysis of XAI methods tailored for cybersecurity contexts; (2) an identification of the critical trade-off between model interpretability and security robustness; (3) a synthesis of defense mechanisms to mitigate XAI-specific vulnerabilities; and (4) a forward-looking perspective proposing future research directions, including quantum-safe XAI, hybrid neuro-symbolic models, and the integration of XAI into Zero Trust Architectures. This review serves as a foundational resource for developing transparent, trustworthy, and resilient AI-driven cybersecurity systems.
Keywords: Explainable AI (XAI); Cybersecurity; Adversarial robustness; Privacy-preserving techniques; Regulatory compliance; Zero trust architecture
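The explanation-guided evasion attack this review warns about can be illustrated on a toy linear detector: if the system exposes per-feature attributions, an attacker can read off the single most influential feature and perturb only that one to cross the decision boundary. Everything here (the weights, sample, and threshold) is invented for illustration and is not from the review.

```python
import numpy as np

# Toy linear "detector": score = w . x; a sample is flagged when score > 0.
w = np.array([0.1, 2.0, 0.3])   # model weights (also leak via explanations)
x = np.array([1.0, 0.8, 1.0])   # malicious sample, initially flagged
print(w @ x > 0)                # prints True

# Explanation-guided evasion: the attacker reads the attribution
# (here |w_i * x_i|, a SHAP-like contribution for a linear model)
# and perturbs only the most influential feature.
attribution = np.abs(w * x)
i = int(np.argmax(attribution))          # feature exposed as most important
x_evaded = x.copy()
x_evaded[i] -= (w @ x) / w[i] + 0.01     # minimal shift past the boundary
print(w @ x_evaded < 0)                  # prints True: detector now misses it
```

Defenses surveyed in the review (adversarially robust training, limiting explanation granularity, differential privacy on attributions) aim precisely to blunt this kind of single-feature exploit.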
PPG-Based Digital Biomarker for Diabetes Detection with Multiset Spatiotemporal Feature Fusion and XAI
17
Authors: Mubashir Ali, Jingzhen Li, Zedong Nie. Computer Modeling in Engineering & Sciences, 2025, Issue 12, pp. 4153-4177 (25 pages)
Diabetes imposes a substantial burden on global healthcare systems. Worldwide, nearly half of individuals with diabetes remain undiagnosed, while conventional diagnostic techniques are often invasive, painful, and expensive. In this study, we propose a noninvasive approach for diabetes detection using photoplethysmography (PPG), which is widely integrated into modern wearable devices. First, we derived velocity plethysmography (VPG) and acceleration plethysmography (APG) signals from PPG to construct multi-channel waveform representations. Second, we introduced a novel multiset spatiotemporal feature fusion framework that integrates hand-crafted temporal, statistical, and nonlinear features with recursive feature elimination and deep feature extraction using a one-dimensional statistical convolutional neural network (1DSCNN). Finally, we developed an interpretable diabetes detection method based on XGBoost with explainable artificial intelligence (XAI) techniques. Specifically, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) were employed to identify and interpret potential digital biomarkers associated with diabetes. To validate the proposed method, we extended the publicly available Guilin People's Hospital dataset by incorporating in-house clinical data from ten subjects, thereby enhancing data diversity. A subject-independent cross-validation strategy was applied to ensure that the testing subjects remained independent of the training data for robust generalization. Compared with existing state-of-the-art methods, our approach achieved superior performance, with an area under the curve (AUC) of 80.5±15.9%, sensitivity of 77.2±7.5%, and specificity of 64.3±18.2%. These results demonstrate that the proposed approach provides a noninvasive, interpretable, and accessible solution for diabetes detection using PPG signals.
Keywords: Diabetes detection; Photoplethysmography (PPG); Spatiotemporal fusion; Subject-independent validation; Digital biomarker; Explainable AI (XAI)
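The first step in the abstract, deriving VPG and APG from the raw PPG waveform, is just first and second time derivatives, which can be sketched with numerical differentiation. The sampling rate and the synthetic sine-wave "pulse" below are assumptions for illustration, not values from the paper.

```python
import numpy as np

fs = 125.0                                 # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)                # 2 s window -> 250 samples
ppg = np.sin(2 * np.pi * 1.2 * t)          # synthetic 1.2 Hz pulse wave

vpg = np.gradient(ppg, 1 / fs)             # first derivative: velocity PPG
apg = np.gradient(vpg, 1 / fs)             # second derivative: acceleration PPG

# Stack into the multi-channel waveform representation that would feed
# the hand-crafted and 1DSCNN feature extractors.
multi_channel = np.stack([ppg, vpg, apg])
print(multi_channel.shape)                 # prints (3, 250)
```

In practice, wearable PPG is noisy, so band-pass filtering before differentiation (differentiation amplifies high-frequency noise) would precede this step.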
An AI-Enabled Framework for Transparency and Interpretability in Cardiovascular Disease Risk Prediction
18
Authors: Isha Kiran, Shahzad Ali, Sajawal ur Rehman Khan, Musaed Alhussein, Sheraz Aslam, Khursheed Aurangzeb. Computers, Materials & Continua, 2025, Issue 3, pp. 5057-5078 (22 pages)
Cardiovascular disease (CVD) remains a leading global health challenge due to its high mortality rate and the complexity of early diagnosis, driven by risk factors such as hypertension, high cholesterol, and irregular pulse rates. Traditional diagnostic methods often struggle with the nuanced interplay of these risk factors, making early detection difficult. In this research, we propose a novel artificial intelligence-enabled (AI-enabled) framework for CVD risk prediction that integrates machine learning (ML) with eXplainable AI (XAI) to provide both high-accuracy predictions and transparent, interpretable insights. Compared to existing studies that typically focus on either optimizing ML performance or using XAI separately for local or global explanations, our approach uniquely combines both local and global interpretability using Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). This dual integration enhances the interpretability of the model and helps clinicians comprehensively understand not just what the model predicts but also why those predictions are made, by identifying the contribution of different risk factors, which is crucial for transparent and informed decision-making in healthcare. The framework uses ML techniques such as K-nearest neighbors (KNN), gradient boosting, random forest, and decision tree, trained on a cardiovascular dataset. Additionally, the integration of LIME and SHAP provides patient-specific insights alongside global trends, ensuring that clinicians receive comprehensive and actionable information. Our experimental results achieve 98% accuracy with the random forest model, with precision, recall, and F1-scores of 97%, 98%, and 98%, respectively. The innovative combination of SHAP and LIME sets a new benchmark in CVD prediction by integrating advanced ML accuracy with robust interpretability, filling a critical gap in existing approaches. This framework paves the way for more explainable and transparent decision-making in healthcare, ensuring that the model is not only accurate but also trustworthy and actionable for clinicians.
Keywords: Artificial intelligence; Cardiovascular disease (CVD); Explainability; eXplainable AI (XAI); Interpretability; LIME; Machine learning (ML); SHAP
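The precision, recall, and F1 figures reported in this abstract come from standard confusion-matrix arithmetic, which is worth making explicit for clinical readers. The counts below are hypothetical, chosen only to show the calculation; they are not the paper's results.

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from raw confusion-matrix counts
    for the positive (disease) class."""
    precision = tp / (tp + fp)              # of flagged patients, how many are true cases
    recall = tp / (tp + fn)                 # of true cases, how many were flagged
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical counts on a test split: 480 true positives,
# 10 false alarms, 12 missed cases.
p, r, f1 = prf(tp=480, fp=10, fn=12)
print(round(p, 3), round(r, 3), round(f1, 3))
```

Because F1 is the harmonic mean, it is pulled toward the weaker of precision and recall, which is why the paper reports all three rather than accuracy alone.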
Explainable artificial intelligence model for the prediction of undrained shear strength
19
Authors: Ho-Hong-Duy Nguyen, Thanh-Nhan Nguyen, Thi-Anh-Thu Phan, Ngoc-Thi Huynh, Quoc-Dat Huynh, Tan-Tai Trieu. Theoretical & Applied Mechanics Letters, 2025, Issue 3, pp. 284-295 (12 pages)
Machine learning (ML) models are widely used for predicting undrained shear strength (USS), but interpretability has been a limitation in various studies. Therefore, this study introduced Shapley additive explanations (SHAP) to clarify the contribution of each input feature in USS prediction. Three ML models, artificial neural network (ANN), extreme gradient boosting (XGBoost), and random forest (RF), were employed, with accuracy evaluated using mean squared error, mean absolute error, and the coefficient of determination (R²). The RF achieved the highest performance with an R² of 0.82. SHAP analysis identified pre-consolidation stress as a key contributor to USS prediction. SHAP dependence plots reveal that the ANN captures smoother, linear feature-output relationships, while the RF handles complex, non-linear interactions more effectively. This suggests a non-linear relationship between USS and input features, with RF outperforming ANN. These findings highlight SHAP's role in enhancing interpretability and promoting transparency and reliability in ML predictions for geotechnical applications.
Keywords: Prediction of undrained shear strength; Explanation model; Shapley additive explanation model; Explainable AI
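The three regression metrics named in this abstract (MSE, MAE, R²) reduce to a few lines of NumPy. The USS values below are hypothetical, chosen only to make the arithmetic easy to follow; they are not from the study.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, MAE, and coefficient of determination R^2."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)                         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return mse, mae, r2

y_true = np.array([20.0, 35.0, 50.0, 65.0])   # hypothetical USS values (kPa)
y_pred = np.array([22.0, 33.0, 52.0, 63.0])
mse, mae, r2 = regression_metrics(y_true, y_pred)
print(mse, mae, round(r2, 3))                 # prints 4.0 2.0 0.986
```

R² compares model residuals against a mean-only baseline, which is why it is the headline figure for ranking the ANN, XGBoost, and RF models.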
AI-Powered Threat Detection in Online Communities: A Multi-Modal Deep Learning Approach
20
Author: Ravi Teja Potla. Journal of Computer and Communications, 2025, Issue 2, pp. 155-171 (17 pages)
The rapid growth of online communities has brought an increase in cyber threats, including cyberbullying, hate speech, misinformation, and online harassment, making content moderation a pressing necessity. Traditional single-modal AI-based detection systems, which analyze text, images, or videos in isolation, have proven ineffective at capturing multi-modal threats, in which malicious actors spread harmful content across multiple formats. To address these challenges, we propose a multi-modal deep learning framework that integrates Natural Language Processing (NLP), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks to identify and mitigate online threats effectively. Our proposed model combines BERT for text classification, ResNet50 for image processing, and a hybrid LSTM-3D CNN network for video content analysis. We constructed a large-scale dataset comprising 500,000 textual posts, 200,000 offensive images, and 50,000 annotated videos from multiple platforms, including Twitter, Reddit, YouTube, and online gaming forums. The system was rigorously evaluated using standard machine learning metrics, including accuracy, precision, recall, F1-score, and ROC-AUC curves. Experimental results demonstrate that our multi-modal approach significantly outperforms single-modal AI classifiers, achieving an accuracy of 92.3%, precision of 91.2%, recall of 90.1%, and an AUC score of 0.95. The findings validate the necessity of integrating multi-modal AI for real-time, high-accuracy online threat detection and moderation. Future work will focus on improving adversarial robustness, enhancing scalability for real-world deployment, and addressing ethical concerns associated with AI-driven content moderation.
Keywords: Multi-modal AI; Deep learning; Natural Language Processing (NLP); Explainable AI (XAI); Federated learning; Cyber threat detection; LSTM; CNNs
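The abstract does not say how the BERT, ResNet50, and LSTM-3D CNN branches are combined, so the sketch below shows one generic possibility: late fusion, where each modality branch emits a threat probability and a weighted average makes the final call. The weights, threshold, and probabilities are all assumptions for illustration.

```python
import numpy as np

def late_fusion(text_p, image_p, video_p, weights=(0.4, 0.3, 0.3)):
    """Weighted average of per-modality threat probabilities
    (one common late-fusion scheme; weights are assumed, not from the paper)."""
    probs = np.array([text_p, image_p, video_p])
    return float(np.dot(weights, probs))

# Hypothetical per-branch outputs for one post: toxic text,
# benign-looking image, borderline video clip.
fused = late_fusion(text_p=0.82, image_p=0.35, video_p=0.60)
print(fused > 0.5)   # prints True: flagged at a 0.5 decision threshold
```

Late fusion keeps the branches independently trainable and lets a missing modality (e.g., a text-only post) be handled by renormalizing the remaining weights; early fusion of embeddings is the usual alternative when cross-modal interactions matter.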