Journal Articles
544 articles found
1. Calibrating Trust in Generative Artificial Intelligence: A Human-Centered Testing Framework with Adaptive Explainability
Authors: Sewwandi Tennakoon, Eric Danso, Zhenjie Zhao. Journal on Artificial Intelligence, 2025, No. 1, pp. 517-547.
Generative Artificial Intelligence (GenAI) systems have achieved remarkable capabilities across text, code, and image generation; however, their outputs remain prone to errors, hallucinations, and biases. Users often overtrust these outputs due to limited transparency, which can lead to misuse and decision errors. This study addresses the challenge of calibrating trust in GenAI through a human-centered testing framework enhanced with adaptive explainability. We introduce a methodology that adjusts explanations dynamically according to user expertise, model output confidence, and contextual risk factors, providing guidance that is informative but not overwhelming. The framework was evaluated using outputs from OpenAI's Generative Pretrained Transformer 4 (GPT-4) for text and code generation and Stable Diffusion, a deep generative image model, for image synthesis. The evaluation covered text, code, and visual modalities. A dataset of 5000 GenAI outputs was created and reviewed by a diverse participant group of 360 individuals categorized by expertise level. Results show that adaptive explanations improve error detection rates, reduce the mean squared trust calibration error, and maintain efficient decision making compared with both static and no-explanation conditions. The framework increased error detection by up to 16% across expertise levels, a gain that can provide practical benefits in high-stakes fields. For example, in healthcare it may help identify diagnostic errors earlier, and in law it may prevent reliance on flawed evidence in judicial work. These improvements highlight the framework's potential to make Artificial Intelligence (AI) deployment safer and more accountable. Visual analyses, including trust-accuracy plots, reliability diagrams, and misconception maps, show that the adaptive approach reduces overtrust and reveals patterns of misunderstanding across modalities. Statistical results confirm the robustness of these findings across novice, intermediate, and expert users. The study offers insights for designing explanations that balance completeness and simplicity to improve trust calibration and cognitive load. The approach has implications for safe and transparent GenAI deployment and can inform both AI interface design and policy development for responsible AI use.
Keywords: Generative AI; trust calibration; human-centered testing; adaptive explainability; user-centered AI; model reliability; human–AI collaboration
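To make the headline metric concrete, below is a minimal Python sketch of a mean squared trust calibration error of the kind described in the abstract. The 0-1 trust scale, the function name, and the toy data are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch: mean squared trust calibration error.
import numpy as np

def trust_calibration_error(trust: np.ndarray, correct: np.ndarray) -> float:
    """Mean squared gap between a user's stated trust in each output
    (0 = no trust, 1 = full trust) and whether the output was correct."""
    return float(np.mean((trust - correct.astype(float)) ** 2))

# Toy data: five GenAI outputs, user trust ratings, and verified correctness.
trust = np.array([0.9, 0.8, 0.3, 0.7, 0.2])
correct = np.array([1, 0, 0, 1, 0])  # 1 = output verified correct

print(f"MSTCE: {trust_calibration_error(trust, correct):.3f}")
# Overtrust shows up as high stated trust on incorrect outputs (second item).
```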
2. Identification of key factors and explainability analysis for surgical decision-making in hepatic alveolar echinococcosis assisted by machine learning
Authors: Da-Long Zhu, Alimu Tulahong, Chang Liu, Ayinuer Aierken, Wei Tan, Rexiati Ruze, Zhong-Dian Yuan, Lei Yin, Tie-Min Jiang, Ren-Yong Lin, Ying-Mei Shao, Tuerganaili Aji. World Journal of Gastroenterology, 2025, No. 37, pp. 109-121.
BACKGROUND: Echinococcosis, caused by Echinococcus parasites, includes alveolar echinococcosis (AE), the most lethal form, which primarily affects the liver and carries a 90% mortality rate without prompt treatment. While radical surgery combined with antiparasitic therapy is ideal, many patients present late, missing the opportunity for hepatectomy. Ex vivo liver resection and autotransplantation (ELRA) offers hope for such patients. Traditional surgical decision-making, relying on clinical experience, is prone to bias. Machine learning can enhance decision-making by identifying key factors influencing surgical choices. This study innovatively employs multiple machine learning methods, integrating various feature selection techniques and SHapley Additive exPlanations (SHAP) interpretive analysis, to explore the key decision factors influencing surgical strategies. AIM: To determine the key preoperative factors influencing surgical decision-making in hepatic AE (HAE) using machine learning. METHODS: This was a retrospective cohort study at the First Affiliated Hospital of Xinjiang Medical University (July 2010 to August 2024). There were 710 HAE patients (545 hepatectomy and 165 ELRA) with complete clinical data. Data included demographics, laboratory indicators, imaging, and pathology. Feature selection was performed using recursive feature elimination, minimum redundancy maximum relevance, and least absolute shrinkage and selection operator regression, with the intersection of these methods yielding 10 critical features. Eleven machine learning algorithms were compared, with eXtreme Gradient Boosting (XGBoost) optimized using Bayesian optimization. Model interpretability was assessed using SHAP analysis. RESULTS: The XGBoost model achieved an area under the curve of 0.935 in the training set and 0.734 in the validation set. The optimal threshold (0.28) yielded a sensitivity of 93.6% and a specificity of 90.9%. SHAP analysis identified type of vascular invasion as the most important feature, followed by platelet count and prothrombin time. Lesions invading the hepatic vein, inferior vena cava, or multiple vessels significantly increased the likelihood of ELRA. Calibration curves showed good agreement between predicted and observed probabilities (0.2-0.7 range). The model demonstrated high net clinical benefit in decision curve analysis, with an accuracy of 0.837, recall of 0.745, and F1 score of 0.788. CONCLUSION: Vascular invasion is the dominant factor influencing the choice of surgical approach in HAE. Machine-learning models, particularly XGBoost, can provide transparent, data-driven support for personalized decision-making.
Keywords: Surgical approach; Hepatectomy; Ex vivo liver resection and autotransplantation; Vascular invasion; Explainability
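As an illustration of the XGBoost-plus-SHAP step described above, here is a hedged Python sketch on synthetic data; the feature names and labels are placeholders, not the study's clinical cohort or tuned model.

```python
# Hypothetical sketch of XGBoost training followed by SHAP attribution.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "vascular_invasion_type": rng.integers(0, 4, 200),  # e.g., none/HV/IVC/multiple
    "platelet_count": rng.normal(200, 50, 200),
    "prothrombin_time": rng.normal(12, 1.5, 200),
})
y = rng.integers(0, 2, 200)  # 0 = hepatectomy, 1 = ELRA (synthetic labels)

model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# SHAP values attribute each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
      .sort_values(ascending=False))  # mean |SHAP| as global importance
```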
3. Differential Privacy Integrated Federated Learning for Power Systems: An Explainability-Driven Approach
Authors: Zekun Liu, Junwei Ma, Xin Gong, Xiu Liu, Bingbing Liu, Long An. Computers, Materials & Continua, 2025, No. 10, pp. 983-999.
With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale, data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to use this data efficiently while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information. Therefore, integrating Differential Privacy (DP) techniques is essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven federated learning framework for power data privacy. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations of this paper are as follows: (1) We propose an explainability-driven power data privacy federated learning framework. (2) We detail a privacy budget allocation strategy: assigning budgets per training round by gradient effectiveness and at model granularity by layer importance. (3) We design a weighted aggregation strategy that considers SHAP values and model accuracy for quality knowledge sharing. (4) Experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
Keywords: Power data; federated learning; differential privacy; explainability
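The following is a minimal sketch of the per-layer privacy-budget idea: clip each layer's gradient and add Gaussian noise scaled inversely to an importance weight. The specific allocation rule shown is an assumption for illustration, not the paper's algorithm.

```python
# Minimal sketch of layer-weighted Gaussian noise for DP-style federated
# learning; the allocation rule (1 - weight) is an illustrative assumption.
import numpy as np

def add_dp_noise(grads, layer_importance, total_sigma=1.0, clip=1.0):
    """Clip each layer's gradient (L2), then add Gaussian noise whose scale
    shrinks for more important layers (a larger share of the budget)."""
    noisy = []
    weights = layer_importance / layer_importance.sum()
    for g, w in zip(grads, weights):
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # L2 clipping
        sigma = total_sigma * (1.0 - w)  # assumed allocation rule
        noisy.append(g + np.random.normal(0.0, sigma * clip, size=g.shape))
    return noisy

grads = [np.ones(4), np.ones(3)]   # gradients of two "layers"
importance = np.array([0.8, 0.2])  # e.g., from SHAP-style layer scores
print(add_dp_noise(grads, importance))
```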
4. The 3D-Geoformer for ENSO studies: a Transformer-based model with integrated gradient methods for enhanced explainability [Cited by 2]
Authors: Lu Zhou, Rong-Hua Zhang. Journal of Oceanology and Limnology, 2025, No. 6, pp. 1688-1708.
Deep learning (DL) has become a crucial technique for predicting the El Niño-Southern Oscillation (ENSO) and evaluating its predictability. While various DL-based models have been developed for ENSO predictions, many fail to capture the coherent multivariate evolution within the coupled ocean-atmosphere system of the tropical Pacific. To address this three-dimensional (3D) limitation and represent ENSO-related ocean-atmosphere interactions more accurately, a novel 3D multivariate prediction model was proposed based on a Transformer architecture that incorporates a spatiotemporal self-attention mechanism. This model, named 3D-Geoformer, offers several advantages, enabling accurate ENSO predictions up to one and a half years in advance. Furthermore, an integrated gradient method was introduced into the model to identify the sources of predictability for sea surface temperature (SST) variability in the eastern equatorial Pacific. Results reveal that the 3D-Geoformer effectively captures ENSO-related precursors during the evolution of ENSO events, particularly the thermocline feedback processes and ocean temperature anomaly pathways on and off the equator. By extending DL-based ENSO predictions from one-dimensional Niño time series to 3D multivariate fields, the 3D-Geoformer represents a significant advancement in ENSO prediction. This study provides details on the model formulation, analysis procedures, sensitivity experiments, and illustrative examples, offering practical guidance for applying the model in ENSO research.
Keywords: Transformer model; 3D-Geoformer; El Niño-Southern Oscillation (ENSO) prediction; explainable artificial intelligence (XAI); integrated gradient method
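For readers unfamiliar with the attribution method named above, here is a generic integrated-gradients sketch in PyTorch; the stand-in linear model and step count are illustrative, and this is not the 3D-Geoformer code.

```python
# Generic integrated-gradients sketch: average gradients along the straight
# path from a baseline to the input, then scale by (x - baseline).
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    if baseline is None:
        baseline = torch.zeros_like(x)
    total_grad = torch.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        point.requires_grad_(True)
        out = model(point).sum()  # scalar target, e.g., a Niño index
        out.backward()
        total_grad += point.grad
    return (x - baseline) * total_grad / steps

model = torch.nn.Sequential(torch.nn.Linear(10, 1))  # stand-in predictor
x = torch.randn(1, 10)
print(integrated_gradients(model, x))  # per-input attribution scores
```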
5. ExplainableDetector: Exploring transformer-based language modeling approach for SMS spam detection with explainability analysis
Authors: Mohammad Amaz Uddin, Muhammad Nazrul Islam, Leandros Maglaras, Helge Janicke, Iqbal H. Sarker. Digital Communications and Networks, 2025, No. 5, pp. 1504-1518.
Short Message Service (SMS) is a widely used and cost-effective communication medium that has unfortunately become a frequent target for unsolicited messages, commonly known as SMS spam. With the rapid adoption of smartphones and increased Internet connectivity, SMS spam has emerged as a prevalent threat. Spammers have recognized the critical role SMS plays in modern communication, making it a prime target for abuse. As cybersecurity threats continue to evolve, the volume of SMS spam has increased substantially in recent years. Moreover, the unstructured format of SMS data creates significant challenges for SMS spam detection, making it more difficult to combat spam attacks successfully. In this paper, we present an optimized and fine-tuned transformer-based language model to address the problem of SMS spam detection. We use a benchmark SMS spam dataset to analyze this spam detection model. Additionally, we utilize pre-processing techniques to obtain clean, noise-free data and address the class imbalance problem by leveraging text augmentation techniques. The overall experiment showed that RoBERTa, our optimized and fine-tuned BERT (Bidirectional Encoder Representations from Transformers) variant model, obtained a high accuracy of 99.84%. To further enhance model transparency, we incorporate Explainable Artificial Intelligence (XAI) techniques that compute positive and negative coefficient scores, offering insight into the model's decision-making process. Additionally, we evaluate the performance of traditional machine learning models as a baseline for comparison. This comprehensive analysis demonstrates the significant impact language models can have on addressing complex text-based challenges within the cybersecurity landscape.
Keywords: Cybersecurity; Machine learning; Large language model; Spam detection; Text analytics; Explainable AI; Fine-tuning; Transformer
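A hedged sketch of one fine-tuning step for RoBERTa on binary spam labels with the Hugging Face transformers library is shown below; dataset loading, augmentation, and the paper's exact training configuration are omitted.

```python
# Illustrative single training step for RoBERTa spam classification.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)  # 0 = ham, 1 = spam

texts = ["WIN a free prize now!!!", "Are we still on for lunch?"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
out = model(**batch, labels=labels)  # forward pass computes the loss
out.loss.backward()
optimizer.step()
print(float(out.loss))
```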
6. A Study on the Explainability of Thyroid Cancer Prediction: SHAP Values and Association-Rule Based Feature Integration Framework
Authors: Sujithra Sankar, S. Sathyalakshmi. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 3111-3138.
In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule based, feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as a powerful tool for explaining thyroid cancer prediction models. In the proposed method, the association-rule based feature integration framework identifies frequently occurring attribute combinations in the dataset. The original dataset is used in training machine learning models and in generating SHAP values from these models. In the next phase, the dataset is integrated with the dominant feature sets identified through association-rule based analysis, and this new integrated dataset is used to re-train the machine learning models. The new SHAP values generated from these models help validate the contributions of feature sets in predicting malignancy. Conventional machine learning models lack interpretability, which can hinder their integration into clinical decision-making systems. In this study, SHAP values are introduced along with association-rule based feature integration as a comprehensive framework for understanding the contributions of feature sets in modelling the predictions. The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer, together with a validation framework for explainability. The proposed model shows an accuracy of 93.48%. Performance metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUROC) are also higher than the baseline models. The results of the proposed model help identify the dominant feature sets that impact thyroid cancer classification and prediction. The features {calcification} and {shape} consistently emerged as the top-ranked features associated with thyroid malignancy, in both association-rule based interestingness metric values and SHAP methods. The paper highlights the potential of rule-based integrated models with SHAP in bridging the gap between machine learning predictions and the interpretability of those predictions, which is required for real-world medical applications.
Keywords: Explainable AI; machine learning; clinical decision support systems; thyroid cancer; association-rule based framework; SHAP values; classification and prediction
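To illustrate the association-rule step, the sketch below computes support, confidence, and lift for one candidate rule by hand with pandas; the attributes and records are synthetic stand-ins for the study's data.

```python
# Illustrative sketch: mining one association rule's metrics with pandas.
import pandas as pd

records = pd.DataFrame({
    "calcification":   [1, 1, 0, 1, 1, 0],
    "irregular_shape": [1, 1, 0, 1, 0, 0],
    "malignant":       [1, 1, 0, 1, 0, 0],
}).astype(bool)

antecedent = records["calcification"] & records["irregular_shape"]
support = (antecedent & records["malignant"]).mean()
confidence = (antecedent & records["malignant"]).sum() / antecedent.sum()
lift = confidence / records["malignant"].mean()

print(f"{{calcification, irregular_shape}} -> malignant: "
      f"support={support:.2f}, confidence={confidence:.2f}, lift={lift:.2f}")
# Feature sets whose rules clear support/confidence thresholds are the
# candidates integrated back into the training data before re-training.
```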
7. Bridging AI and explainability in civil engineering: the Yin-Yang of predictive power and interpretability
Authors: Monjurul Hasan, Ming Lu. AI in Civil Engineering, 2025, No. 1, pp. 424-441.
Civil engineering relies on data from experiments or simulations to calibrate models that approximate system behaviors. This paper examines machine learning (ML) algorithms for AI-driven decision support in civil engineering, specifically construction engineering and management, where complex input-output relationships demand both predictive accuracy and interpretability. Explainable AI (XAI) is critical for safety- and compliance-sensitive applications, ensuring transparency in AI decisions. The literature review identifies key XAI evaluation attributes (model type, explainability, perspective, and interpretability) and assesses the Enhanced Model Tree (EMT), a novel method demonstrating strong potential for civil engineering applications compared to commonly applied ML algorithms. The study highlights the need to balance AI's predictive power with XAI's transparency, akin to the Yin-Yang philosophy: AI advances efficiency and optimization, while XAI provides the logical reasoning behind conclusions. Drawing on insights from the literature, the study proposes a tailored XAI assessment framework addressing civil engineering's unique needs: problem context, data constraints, and model explainability. By formalizing this synergy, the research fosters trust in AI systems, enabling safer and more socially responsible outcomes. The findings underscore XAI's role in bridging the gap between complex AI models and end-user accountability, ensuring AI's full potential is realized in the field.
Keywords: Explainable AI (XAI); AI transparency; Causal reasoning; Sensitivity analysis; Feature relevance; AI in construction engineering; Data-driven engineering
8. Artificial intelligence propels lung cancer screening: innovations and the challenges of explainability and reproducibility
Authors: Mario Mascalchi, Chiara Marzi, Stefano Diciotti. Signal Transduction and Targeted Therapy, 2025, No. 2, pp. 492-494.
In a recent study published in Nature Medicine, Wang, Shao, and colleagues successfully addressed two critical issues of lung cancer (LC) screening with low-dose computed tomography (LDCT), whose widespread implementation, despite its capacity to decrease LC mortality, remains challenging: (1) the difficulty in accurately distinguishing malignant nodules from the far more common benign nodules detected on LDCT, and (2) the insufficient coverage of LC screening in resource-limited areas [1]. To perform nodule risk stratification, Wang et al. developed and validated a multi-step, multidimensional artificial intelligence (AI)-based system (Fig. 1) and introduced a data-driven Chinese Lung Nodules Reporting and Data System (C-Lung-RADS) [1]. A Lung-RADS system was developed in the US to stratify lung nodules into categories of increasing risk of LC and to provide corresponding management recommendations.
Keywords: Screening; low-dose computed tomography; explainability; reproducibility; nodule risk stratification; malignant nodules; artificial intelligence
9. Explainable AI-based Short-term Voltage Stability Mechanism Analysis: Explainability Measure and Stability-oriented Preventive Control
Authors: Boyang Shan, Alberto Borghetti, Weiye Zheng, Qi Guo. CSEE Journal of Power and Energy Systems, 2025, No. 6, pp. 2673-2683.
As the cornerstone of the safe operation of energy systems, short-term voltage stability (STVS) has been assessed effectively with the advance of artificial intelligence (AI). However, the black-box models of traditional AI barely identify what the specific key factors in power systems are and how they influence STVS, thus providing limited practical information for engineers in on-site dispatch centers. Enlightened by the latest explainable artificial intelligence (XAI) techniques, this paper aims to unveil the mechanism underlying the complex STVS problem. First, the ground truth for STVS is established via qualitative analysis. Based on this, an explainability score is devised to measure the trustworthiness of different XAI techniques, among which Local Interpretable Model-agnostic Explanations (LIME) exhibits the best performance in this study. Finally, a sequential approach is proposed to extend the local interpretation of LIME to a broader scope; it is applied to enhance STVS performance through distribution system load shedding before a fault occurs, serving as an example to demonstrate the application merits of the explored mechanism. Numerical results on a modified IEEE system demonstrate that this finding facilitates the identification of the most suitable XAI technique for STVS, while also providing an interpretable mechanism for STVS and offering accessible guidance for stability-aware dispatch.
Keywords: explainability score; explainable artificial intelligence; mechanism analysis; sequential approach; short-term voltage stability
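Below is a generic LIME usage sketch for a tabular stability classifier; the power-system feature names and the synthetic classifier are assumptions for illustration, not the paper's test system.

```python
# Generic LIME sketch: explain one prediction of a tabular classifier.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = stable (synthetic rule)
clf = RandomForestClassifier(n_estimators=50).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["load_margin", "fault_duration",
                   "induction_motor_share", "reactive_reserve"],
    class_names=["unstable", "stable"], mode="classification")

exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # local feature weights for this operating point
```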
10. Reliable prediction for TBM energy consumption during tunnel excavation: A novel technique balancing explainability and performance
Authors: Wenli Liu, Yafei Qi, Fenghua Liu. Underground Space, 2025, No. 3, pp. 77-95.
Recently, AI-based models have been applied to accurately estimate tunnel boring machine (TBM) energy consumption. Although data-driven models exhibit strong predictive capabilities, their outputs, derived from "black box" processes, are challenging to interpret and generalize. Consequently, this study develops an XGB_MOFS model that combines extreme gradient boosting (XGBoost) and multi-objective feature selection (MOFS) to improve the accuracy and explainability of energy consumption prediction. The XGB_MOFS model includes: (1) a causal inference framework to identify the causal relationships among influential factors, and (2) a MOFS approach to balance predictive performance and explainability. Two case studies are carried out to verify the proposed method. Results show that XGB_MOFS achieves a high degree of accuracy and robustness in energy consumption prediction. The XGB_MOFS model, balancing accuracy with explainability, serves as an effective and feasible tool for regulating TBM energy consumption.
Keywords: Machine learning; Multi-objective feature selection; Explainability; Energy consumption; Shield tunneling
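As a toy illustration of multi-objective feature selection, the sketch below scores every feature subset on two objectives, cross-validated error and subset size, and keeps the Pareto-optimal ones; the paper's MOFS search and objectives are richer than this exhaustive loop.

```python
# Toy multi-objective feature selection: Pareto filter on (error, size).
from itertools import combinations
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)

candidates = []
for k in range(1, 6):
    for subset in combinations(range(5), k):
        err = -cross_val_score(
            XGBRegressor(n_estimators=50), X[:, list(subset)], y,
            cv=3, scoring="neg_root_mean_squared_error").mean()
        candidates.append((subset, err, k))

# Keep subsets not dominated in both error and subset size.
pareto = [c for c in candidates
          if not any((o[1] <= c[1] and o[2] < c[2]) or
                     (o[1] < c[1] and o[2] <= c[2]) for o in candidates)]
for subset, err, k in sorted(pareto, key=lambda c: c[2]):
    print(subset, f"RMSE={err:.1f}")
```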
11. An Informer approach to civil aviation hard landing prediction considering learning assurance and explainability
Authors: Lei Dong, Xinqi Peng, Xi Chen, Jiachen Liu. Aerospace Systems, 2025, No. 4, pp. 789-807.
Predicting hard landings is crucial for aiding pilots' decisions and ensuring flight safety. This paper addresses the limitations of current hard landing prediction models, specifically in terms of long-term forecasting accuracy and explainability. To overcome these challenges, it introduces an Informer-based hard landing prediction model, developed using QAR data, and performs an in-depth explainability analysis of the model's output. Following the principles of learning assurance, the data processing and model training phases are standardized. This involves the application of forward-backward filtering and Granger causality testing to refine the QAR data, thus creating a dataset that aligns with essential prediction standards. The Informer model addresses the challenges of multivariate time-series discontinuities by localizing its network to enhance data adaptability. During model training and testing, hyperparameters are finely tuned to maximize prediction accuracy and generalizability. To improve transparency, the model employs an attention weight matrix and a feature-reset-based explainability method. Tests show that models trained on datasets developed through a defined data management process deliver favorable predictive performance. The localized enhanced network improved prediction accuracy by 23.5% and increased its capacity to learn from discontinuous multivariate time series. Compared to the LSTM network, the Informer network achieved an 18.83% improvement in prediction accuracy and demonstrated superior long-horizon time-series prediction capabilities.
Keywords: Learning assurance; Hard landing prediction; Data management; Neural network; Explainability
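The Granger-causality screening mentioned above can be illustrated with statsmodels; the two synthetic series below stand in for real QAR channels, and the lag structure is contrived so the test has something to find.

```python
# Illustrative Granger-causality screen for QAR-style channels.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
pitch = rng.normal(size=300)
# Vertical speed that follows pitch with a two-sample lag, plus noise.
vertical_speed = np.roll(pitch, 2) + 0.1 * rng.normal(size=300)

# Test whether the second column (pitch) Granger-causes the first.
data = np.column_stack([vertical_speed, pitch])
results = grangercausalitytests(data, maxlag=3, verbose=False)
for lag, (tests, _) in results.items():
    print(lag, f"ssr F-test p={tests['ssr_ftest'][1]:.4f}")
# Channels failing such tests can be dropped when building the dataset.
```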
12. Explainability enhanced liver disease diagnosis technique using tree selection and stacking ensemble-based random forest model
Authors: Mohammad Mamun, Safiul Haque Chowdhury, Muhammad Minoar Hossain, M. R. Khatun, Sadiq Iqbal. Informatics and Health, 2025, No. 1, pp. 17-40.
Background: Liver disease (LD) significantly impacts global health, requiring accurate diagnostic methods. This study aims to develop an automated system for LD prediction using machine learning (ML) and explainable artificial intelligence (XAI), enhancing diagnostic precision and interpretability. Methods: This research systematically analyzes two distinct datasets encompassing liver health indicators. A combination of preprocessing techniques, including feature optimization methods such as Forward Feature Selection (FFS), Backward Feature Selection (BFS), and Recursive Feature Elimination (RFE), is applied to enhance data quality. After that, ML models, namely Support Vector Machines (SVM), Naive Bayes (NB), Random Forest (RF), K-nearest neighbors (KNN), Decision Trees (DT), and a novel Tree Selection and Stacking Ensemble-based RF (TSRF), are assessed on the datasets to diagnose LD. Finally, the ultimate model is selected based on cross-validation and evaluation through performance metrics such as accuracy, precision, and specificity, and efficient XAI methods convey the selected model's interpretability. Findings: The analysis reveals TSRF as the most effective model, achieving a peak accuracy of 99.92% on Dataset-1 without feature optimization and 88.88% on Dataset-2 with RFE optimization. XAI techniques, including SHAP and LIME plots, highlight key features influencing model predictions, providing insights into the reasoning behind classification outcomes. Interpretation: The findings highlight TSRF's potential in improving LD diagnosis, using XAI to enhance transparency and trust in ML models. Despite high accuracy and interpretability, limitations such as dataset bias and lack of clinical validation remain. Future work focuses on integrating advanced XAI, diversifying datasets, and applying the approach in clinical settings for reliable diagnostics.
Keywords: Liver disease; Diagnosis; Machine learning; Explainable artificial intelligence (XAI); Feature optimization
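A minimal stacking-ensemble sketch with scikit-learn follows, showing the general mechanism behind a TSRF-style model; the tree-selection step from the paper is not reproduced, and the base learners are illustrative.

```python
# Minimal stacking sketch: tree-based base learners plus a meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100,
                                              random_state=0)),
                ("dt", DecisionTreeClassifier(max_depth=5, random_state=0))],
    final_estimator=LogisticRegression(),  # learns from base predictions
    cv=5)

print(cross_val_score(stack, X, y, cv=5).mean())  # cross-validated accuracy
```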
13. Transforming Healthcare with State-of-the-Art Medical-LLMs: A Comprehensive Evaluation of Current Advances Using Benchmarking Framework
Authors: Himadri Nath Saha, Dipanwita Chakraborty Bhattacharya, Sancharita Dutta, Arnab Bera, Srutorshi Basuray, Satyasaran Changdar, Saptarshi Banerjee, Jon Turdiev. Computers, Materials & Continua, 2026, No. 2, pp. 234-289.
The emergence of Medical Large Language Models (Med-LLMs) has significantly transformed healthcare. Med-LLMs serve as transformative tools that enhance clinical practice through applications in decision support, documentation, and diagnostics. This evaluation examines the performance of leading Med-LLMs, including GPT-4Med, Med-PaLM, MEDITRON, PubMedGPT, and MedAlpaca, across diverse medical datasets. It provides graphical comparisons of their effectiveness in distinct healthcare domains. The study introduces a domain-specific categorization system that aligns these models with optimal applications in clinical decision-making, documentation, drug discovery, research, patient interaction, and public health. The paper addresses deployment challenges of Med-LLMs, emphasizing trustworthiness and explainability as essential requirements for healthcare AI. It presents current evaluation techniques that improve model transparency in high-stakes medical contexts and analyzes regulatory frameworks using benchmarking datasets such as MedQA, MedMCQA, PubMedQA, and MIMIC. By identifying ongoing challenges in bias mitigation, reliability, and ethical compliance, this work serves as a resource for selecting appropriate Med-LLMs and outlines future directions in the field. This analysis offers a roadmap for developing Med-LLMs that balance technological innovation with the trust and transparency required for clinical integration, a perspective often overlooked in the existing literature.
Keywords: Medical large language models (Med-LLM); AI in healthcare; natural language processing (NLP) in medicine; fine-tuning medical LLMs; retrieval-augmented generation (RAG) in medicine; multi-modal learning in healthcare; explainability and transparency in medical AI; FDA regulations for AI in medicine; evaluation and benchmarking of medical large language models
14. A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [Cited by 2]
Authors: Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, Suhang Wang. Machine Intelligence Research (EI, CSCD), 2024, No. 6, pp. 1011-1061.
Graph neural networks (GNNs) have made rapid developments in recent years. Due to their great ability in modeling graph-structured data, GNNs are widely used in various applications, including high-stakes scenarios such as financial analysis, traffic prediction, and drug discovery. Despite their great potential to benefit humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society. For example, existing works demonstrate that attackers can fool GNNs into giving the outcome they desire with unnoticeable perturbations of the training graph. GNNs trained on social networks may embed discrimination in their decision process, strengthening undesirable societal bias. Consequently, trustworthy GNNs in various aspects are emerging to prevent harm from GNN models and increase users' trust in GNNs. In this paper, we give a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability. For each aspect, we give a taxonomy of the related methods and formulate general frameworks for the multiple categories of trustworthy GNNs. We also discuss future research directions for each aspect and the connections between these aspects that help achieve trustworthiness.
Keywords: Graph neural networks (GNNs); Trustworthy; Privacy; Robustness; Fairness; Explainability
15. Explainability and Interpretability in Electric Load Forecasting Using Machine Learning Techniques – A Review [Cited by 2]
Authors: Lukas Baur, Konstantin Ditschuneit, Maximilian Schambach, Can Kaymakci, Thomas Wollmann, Alexander Sauer. Energy and AI (EI), 2024, No. 2, pp. 483-496.
Electric Load Forecasting (ELF) is the central instrument for planning and controlling demand response programs, electricity trading, and consumption optimization. Due to the increasing automation of these processes, meaningful and transparent forecasts become more and more important, while at the same time the complexity of the machine learning models and architectures used increases. Because there is growing interest in interpretable and explainable load forecasting methods, this work conducts a literature review to present approaches already applied to explainability and interpretability in load forecasting with machine learning. Based on extensive literature research covering eight publication portals, recurring modeling approaches, trends, and modeling techniques are identified and clustered by properties to achieve more interpretable and explainable load forecasts. The results on interpretability show an increase in the use of probabilistic models, methods for time-series decomposition, and fuzzy logic, in addition to classically interpretable models. Dominant explainable approaches are feature importance and attention mechanisms. The discussion shows that much knowledge from the related field of time-series forecasting still needs to be adapted to the problems of ELF. Compared with other applications of explainable and interpretable methods, such as clustering, there are currently relatively few research results, but with an increasing trend.
Keywords: Electric load forecasting; Explainability; Interpretability; Structured review
16. Explainability-based Trust Algorithm for electricity price forecasting models [Cited by 1]
Authors: Leena Heistrene, Ram Machlev, Michael Perl, Juri Belikov, Dmitry Baimel, Kfir Levy, Shie Mannor, Yoash Levron. Energy and AI, 2023, No. 4, pp. 141-158.
Advanced machine learning (ML) algorithms have outperformed traditional approaches in various forecasting applications, especially electricity price forecasting (EPF). However, the prediction accuracy of ML models drops substantially if the input data are not similar to those seen during training. This is often observed in EPF problems when market dynamics change owing to a rise in fuel prices, an increase in renewable penetration, a change in operational policies, etc. While the dip in model accuracy for unseen data is a cause for concern, what is even more challenging is not knowing when the ML model will respond in such a manner. Such uncertainty makes power market participants, like bidding agents and retailers, vulnerable to substantial financial loss caused by the prediction errors of EPF models. Therefore, it becomes essential to identify whether or not the model prediction at a given instance is trustworthy. In this light, this paper proposes a trust algorithm for EPF users based on explainable artificial intelligence techniques. The suggested algorithm generates trust scores that reflect the model's prediction quality for each new input. These scores are formulated in two stages: in the first stage, a coarse version of the score is formed using correlations of local and global explanations, and in the second stage, the score is fine-tuned further by the Shapley additive explanations values of different features. Such score-based explanations are more straightforward than feature-based visual explanations for EPF users like asset managers and traders. Datasets from Italy's and ERCOT's electricity markets validate the efficacy of the proposed algorithm. Results show that the algorithm has more than 85% accuracy in identifying good predictions when the data distribution is similar to the training dataset. In the case of distribution shift, the algorithm shows the same accuracy level in identifying bad predictions.
Keywords: Electricity price forecasting; EPF; Explainable AI model; XAI; SHAP; Explainability
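A simplified sketch of the first (coarse) stage of such a trust score is shown below: correlate the local explanation of a new prediction with the global explanation learned on training data. The cosine-based scoring rule here is an assumption for illustration, not the paper's formula.

```python
# Hypothetical coarse trust score from local/global explanation agreement.
import numpy as np

def coarse_trust_score(local_shap, global_shap):
    """Cosine similarity between the absolute local attribution vector and
    the mean absolute global attributions; for nonnegative vectors this
    already lies in [0, 1]."""
    a, b = np.abs(local_shap), np.abs(global_shap)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

global_shap = np.array([0.9, 0.5, 0.1, 0.05])  # learned on training data
typical = np.array([0.8, 0.6, 0.05, 0.1])      # new input, usual pattern
odd = np.array([0.05, 0.1, 0.9, 0.7])          # new input, unusual pattern

print(coarse_trust_score(typical, global_shap))  # high -> likely reliable
print(coarse_trust_score(odd, global_shap))      # low  -> treat with caution
```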
17. Explainable Hybrid AI Model for DDoS Detection in SDN-Enabled Internet of Vehicle
Authors: Oumaima Saidani, Nazia Azim, Ateeq Ur Rehman, Akbayan Bekarystankyzy, Hala Abdel Hameed, Mostafa Mohamed R. Abonazel, Ehab Ebrahim Mohamed Ebrahim, Sarah Abu Ghazalah. Computers, Materials & Continua, 2026, No. 5, pp. 499-526.
The convergence of Software Defined Networking (SDN) and the Internet of Vehicles (IoV) enables a flexible, programmable, and globally visible network control architecture across Road Side Units (RSUs), cloud servers, and automobiles. While this integration enhances scalability and safety, it also raises sophisticated cyberthreats, particularly Distributed Denial of Service (DDoS) attacks. Traditional rule-based anomaly detection methods often struggle to detect modern low-and-slow DDoS patterns, leading to higher false positives. To this end, this study proposes an explainable hybrid framework to detect DDoS attacks in SDN-enabled IoV (SDN-IoV). The hybrid framework utilizes a Residual Network (ResNet) to capture spatial correlations and a Bidirectional Long Short-Term Memory (BiLSTM) network to capture both forward and backward temporal dependencies in high-dimensional input patterns. To ensure transparency and trustworthiness, the model integrates the Explainable AI (XAI) technique SHapley Additive exPlanations (SHAP). SHAP highlights the contribution of each feature during the decision-making process, helping security analysts understand the rationale behind attack classification decisions. The SDN-IoV environment is created in Mininet-WiFi and SUMO, and the hybrid model is trained on the CICDDoS2019 security dataset. The simulation results reveal the efficacy of the proposed model in terms of standard performance metrics compared with similar baseline methods.
Keywords: Explainable AI; software defined networking; Internet of vehicles; DDoS attack; ResNet; BiLSTM
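The following compact PyTorch sketch shows one plausible residual-block-plus-BiLSTM layout for flow-window classification; the layer sizes and residual design are placeholders, not the paper's configuration.

```python
# Illustrative residual-conv + BiLSTM hybrid for sequence classification.
import torch
import torch.nn as nn

class ResBiLSTM(nn.Module):
    def __init__(self, n_features, hidden=64, n_classes=2):
        super().__init__()
        # 1D residual block over the time axis captures local correlations
        # among flow features.
        self.conv1 = nn.Conv1d(n_features, n_features, 3, padding=1)
        self.conv2 = nn.Conv1d(n_features, n_features, 3, padding=1)
        # BiLSTM captures forward and backward temporal dependencies.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                     # x: (batch, time, features)
        h = x.transpose(1, 2)                 # -> (batch, features, time)
        h = torch.relu(self.conv2(torch.relu(self.conv1(h))) + h)  # residual
        h, _ = self.lstm(h.transpose(1, 2))
        return self.fc(h[:, -1])              # classify from last time step

model = ResBiLSTM(n_features=20)
print(model(torch.randn(8, 30, 20)).shape)    # torch.Size([8, 2])
```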
18. The Transparency Revolution in Geohazard Science: A Systematic Review and Research Roadmap for Explainable Artificial Intelligence
Authors: Moein Tosan, Vahid Nourani, Ozgur Kisi, Yongqiang Zhang, Sameh A. Kantoush, Mekonnen Gebremichael, Ruhollah Taghizadeh-Mehrjardi, Jinhui Jeanne Huang. Computer Modeling in Engineering & Sciences, 2026, No. 1, pp. 77-117.
The integration of machine learning (ML) into geohazard assessment has instigated a paradigm shift, producing models with a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational studies is used to map the intellectual and methodological contours of this rapidly expanding field. The analysis reveals that current research efforts are concentrated predominantly on landslide and flood assessment. Methodologically, tree-based ensembles and deep learning models dominate the literature, with SHapley Additive exPlanations (SHAP) frequently adopted as the principal post-hoc explanation technique. More importantly, the review documents how the role of XAI has shifted: rather than being used solely as a tool for interpreting models after training, it is increasingly integrated into the modeling cycle itself. Recent applications include its use in feature selection, adaptive sampling strategies, and model evaluation. The evidence also shows that GeoXAI extends beyond producing feature rankings: it reveals nonlinear thresholds and interaction effects that generate deeper mechanistic insight into hazard processes. Nevertheless, several key challenges remain unresolved, especially the crucial need for interpretation stability, the demanding scholarly task of reliably distinguishing correlation from causation, and the development of appropriate methods for treating complex spatio-temporal dynamics.
Keywords: Explainable artificial intelligence (XAI); geohazard assessment; machine learning; SHAP; trustworthy AI; model interpretability
19. Residual-based neural network for unmodeled distortions in 2D coordinate transformation
Authors: Vinicius Francisco Rofatto, Luiz Felipe Rodrigues de Almeida, Marcelo Tomio Matsuoka, Ivandro Klein, Mauricio Roberto Veronez, Luiz Gonzaga Da Silveira Junior. Geodesy and Geodynamics, 2026, No. 1, pp. 104-119.
Coordinate transformation models often fail to account for nonlinear and spatially dependent distortions, leading to significant residual errors in geospatial applications. Here, we propose a residual-based neural correction (RBNC) strategy, in which a neural network learns to model only the systematic distortions left by an initial geometric transformation. By focusing solely on residual patterns, RBNC reduces model complexity and improves performance, particularly in scenarios with sparse or structured control point configurations. We evaluate the method using both simulated datasets (with varying distortion intensities and sampling strategies) and real-world image georeferencing tasks. Compared with direct neural-network coordinate converters and classical transformation models, RBNC delivers more accurate and stable results under challenging conditions, while maintaining comparable performance in ideal cases. These findings demonstrate the effectiveness of residual modelling as a lightweight and robust alternative for improving coordinate transformation accuracy.
Keywords: Artificial intelligence; Machine learning; Modelling; Nonlinear systems; Model selection; Explainable AI
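A hypothetical end-to-end sketch of the RBNC idea on synthetic 2D coordinates: fit an affine transform by least squares, then train a small network only on its residuals. The distortion model and network size are assumptions for illustration.

```python
# RBNC-style sketch: affine fit first, then a small MLP on the residuals.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(200, 2))            # source coordinates
# True mapping: affine part plus a smooth nonlinear distortion.
tgt = src @ np.array([[1.0, 0.02], [-0.02, 1.0]]) + 5.0
tgt += 0.5 * np.sin(src / 15.0)

# Step 1: least-squares affine fit (6-parameter 2D transform).
A = np.hstack([src, np.ones((len(src), 1))])
coef, *_ = np.linalg.lstsq(A, tgt, rcond=None)
affine_pred = A @ coef

# Step 2: a small MLP learns only the residuals of the affine model.
residuals = tgt - affine_pred
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(src, residuals)

corrected = affine_pred + mlp.predict(src)
print("affine RMSE:", np.sqrt(((tgt - affine_pred) ** 2).mean()))
print("RBNC RMSE:  ", np.sqrt(((tgt - corrected) ** 2).mean()))
```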
20. Graph-Based Intrusion Detection with Explainable Edge Classification Learning
Authors: Jaeho Shin, Jaekwang Kim. Computers, Materials & Continua, 2026, No. 1, pp. 610-635.
Network attacks have become a critical issue in the internet security domain. Detection methodologies based on artificial intelligence have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to explain AI detection results logically. We propose a method for classifying network attacks using graph models so that the detection results can be explained. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks via edge classification. To explain the prediction results, we observe numerical changes under random masking and calculate the importance of neighbors, allowing us to extract significant subgraphs. Our experiments on six public datasets demonstrate superior performance, with an average F1-score of 0.960 and accuracy of 0.964, outperforming traditional machine learning and other graph models. The visual representation of the extracted subgraphs highlights the neighboring nodes that have the greatest impact on the results, thus explaining each detection. In conclusion, this study demonstrates that graph-based models are suitable for network attack detection in complex environments, and that the importance of graph neighbors can be calculated to analyze the results efficiently. This approach can contribute to real-world network security analyses and provides a new direction for the field.
Keywords: Intrusion detection; graph neural network; explainable AI; network attacks; GraphSAGE
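A generic GraphSAGE edge-classification sketch in PyTorch Geometric is given below, illustrating the approach described in the abstract; the graph construction from packet data and the masking-based explanation step are not shown.

```python
# GraphSAGE edge classification: embed nodes, classify edges by endpoints.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class EdgeClassifier(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)
        self.conv2 = SAGEConv(hid_dim, hid_dim)
        self.head = torch.nn.Linear(2 * hid_dim, n_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)
        src, dst = edge_index  # classify each edge (flow) from its endpoints
        return self.head(torch.cat([h[src], h[dst]], dim=-1))

# Toy graph: 4 hosts, 4 directed flows, binary benign/attack labels.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
y = torch.tensor([0, 1, 0, 1])

model = EdgeClassifier(8, 16, 2)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss = F.cross_entropy(model(x, edge_index), y)
loss.backward()
opt.step()
print(float(loss))
```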