Journal Articles
1,155 articles found
1. Machine learning accelerated catalysts design for CO reduction: An interpretability and transferability analysis
Authors: Yuhang Wang, Yaqin Zhang, Ninggui Ma, Jun Zhao, Yu Xiong, Shuang Luo, Jun Fan. Journal of Materials Science & Technology, 2025, Issue 10, pp. 14-23 (10 pages).
Developing machine learning frameworks with predictive power, interpretability, and transferability is crucial, yet it faces challenges in the field of electrocatalysis. To achieve this, we employed rigorous feature engineering to establish a finely tuned gradient boosting regressor (GBR) model, which adeptly captures the physical complexity from feature space to target variables. We demonstrated that environmental electron effects and atomic number significantly govern the success of the mapping process via global and local explanations. The finely tuned GBR model exhibits exceptional robustness in predicting CO adsorption energies (R_(ave)^(2) = 0.937, RMSE = 0.153 eV). Moreover, the model demonstrated remarkable transfer learning ability, showing excellent predictive power for OH, NO, and N_(2) adsorption. Importantly, the GBR model exhibits exceptional predictive capability across an extensive search space, thereby demonstrating profound adaptability and versatility. Our research framework significantly enhances the interpretability and transferability of machine learning in electrocatalysis, offering vital insights for further advancements.
Keywords: Machine learning; First-principles calculation; Interpretability; Transferability; CO reduction
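Illustrative sketch (not from the paper): the workflow described above amounts to fitting a gradient boosting regressor on engineered descriptors and scoring it with R^(2) and RMSE. The descriptors and data below are synthetic placeholders, and scikit-learn is assumed as the modeling library.

    # Hypothetical GBR workflow for adsorption-energy regression; synthetic data only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))            # stand-ins for engineered descriptors (atomic number, etc.)
    y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)   # stand-in for CO adsorption energy (eV)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    gbr = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)
    gbr.fit(X_tr, y_tr)
    pred = gbr.predict(X_te)
    print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)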
2. An AI-Enabled Framework for Transparency and Interpretability in Cardiovascular Disease Risk Prediction
Authors: Isha Kiran, Shahzad Ali, Sajawal ur Rehman Khan, Musaed Alhussein, Sheraz Aslam, Khursheed Aurangzeb. Computers, Materials & Continua, 2025, Issue 3, pp. 5057-5078 (22 pages).
Cardiovascular disease (CVD) remains a leading global health challenge due to its high mortality rate and the complexity of early diagnosis, driven by risk factors such as hypertension, high cholesterol, and irregular pulse rates. Traditional diagnostic methods often struggle with the nuanced interplay of these risk factors, making early detection difficult. In this research, we propose a novel artificial intelligence-enabled (AI-enabled) framework for CVD risk prediction that integrates machine learning (ML) with eXplainable AI (XAI) to provide both high-accuracy predictions and transparent, interpretable insights. Compared to existing studies that typically focus on either optimizing ML performance or using XAI separately for local or global explanations, our approach uniquely combines both local and global interpretability using Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). This dual integration enhances the interpretability of the model and helps clinicians comprehensively understand not just what the model predicts but also why those predictions are made, by identifying the contribution of different risk factors, which is crucial for transparent and informed decision-making in healthcare. The framework uses ML techniques such as K-nearest neighbors (KNN), gradient boosting, random forest, and decision tree, trained on a cardiovascular dataset. Additionally, the integration of LIME and SHAP provides patient-specific insights alongside global trends, ensuring that clinicians receive comprehensive and actionable information. Our experimental results achieve 98% accuracy with the random forest model, with precision, recall, and F1-scores of 97%, 98%, and 98%, respectively. The innovative combination of SHAP and LIME sets a new benchmark in CVD prediction by integrating advanced ML accuracy with robust interpretability, filling a critical gap in existing approaches. This framework paves the way for more explainable and transparent decision-making in healthcare, ensuring that the model is not only accurate but also trustworthy and actionable for clinicians.
Keywords: Artificial intelligence; Cardiovascular disease (CVD); Explainability; eXplainable AI (XAI); Interpretability; LIME; Machine learning (ML); SHAP
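Illustrative sketch (not from the paper): combining global SHAP explanations with a local LIME explanation on top of a random forest, the general pattern the abstract describes. The feature names, toy labels, and the use of the shap and lime packages are assumptions.

    # Hedged sketch of dual global/local explanation for a tabular risk classifier.
    import numpy as np
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    feature_names = ["age", "systolic_bp", "cholesterol", "pulse_rate"]   # hypothetical columns
    X = rng.normal(size=(300, 4))
    y = (X[:, 1] + X[:, 2] > 0).astype(int)                               # toy risk label

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    shap_values = shap.TreeExplainer(rf).shap_values(X)                   # global view across patients

    lime_exp = LimeTabularExplainer(X, feature_names=feature_names,
                                    class_names=["low", "high"], mode="classification")
    print(lime_exp.explain_instance(X[0], rf.predict_proba, num_features=4).as_list())  # one patient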
3. Artificial intelligence high-throughput prediction building dataset to enhance the interpretability of hybrid halide perovskite bandgap
Authors: Wenning Chen, Jungchul Yun, Doyun Im, Sijia Li, Kelvian T. Mularso, Jihun Nam, Bonghyun Jo, Sangwook Lee, Hyun Suk Jung. Journal of Energy Chemistry, 2025, Issue 10, pp. 649-661 (13 pages).
The bandgap is a key parameter for understanding and designing hybrid perovskite material properties, as well as for developing photovoltaic devices. Traditional bandgap calculation methods such as ultraviolet-visible spectroscopy and first-principles calculations are time- and power-consuming, not to mention capturing bandgap change mechanisms for hybrid perovskite materials across a wide range of unknown space. In the present work, an artificial intelligence ensemble comprising two classifiers (with F1 scores of 0.9125 and 0.925) and a regressor (with a mean squared error of 0.0014 eV) is constructed to achieve high-precision prediction of the bandgap. A perovskite bandgap dataset is established through high-throughput prediction of bandgaps by the ensemble. Based on the self-built dataset, partial dependence analysis (PDA) is developed to interpret the mechanisms that influence the bandgap. Meanwhile, an interpretable mathematical model with an R^(2) of 0.8417 is generated using the genetic programming symbolic regression (GPSR) technique. The constructed PDA maps agree well with the SHapley Additive exPlanations, the GPSR model, and experimental verification. Through PDA, we reveal the boundary effect, the bowing effect, and their evolution trends with key descriptors.
Keywords: Artificial intelligence; High-throughput; Perovskite bandgap; Partial dependence analysis; Model interpretability
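Illustrative sketch (not from the paper): partial dependence analysis of a fitted regressor, the interpretation step the abstract calls PDA. The descriptors, the toy bandgap function, and scikit-learn's partial_dependence utility are assumptions; the authors' ensemble and dataset are not reproduced here.

    # Hedged sketch of partial dependence analysis on a surrogate bandgap model.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import partial_dependence

    rng = np.random.default_rng(2)
    X = rng.uniform(size=(400, 3))                    # hypothetical composition descriptors
    y = 1.5 + 0.8 * X[:, 0] - 0.5 * X[:, 0] ** 2 + 0.2 * X[:, 1]   # toy bandgap with a bowing-like term

    model = GradientBoostingRegressor().fit(X, y)
    pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)
    print(pd_result["average"][0])                    # mean predicted bandgap as descriptor 0 varies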
4. Predicting soil desiccation cracking behavior using machine learning and interpretability analysis
Authors: Ting Wang, Chao-Sheng Tang, Zhixiong Zeng, Jin-Jian Xu, Rui Wang, Qing Cheng, Zhengtao Shen, She-Feng Hao, Yong-Xiang Yu. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 9, pp. 6020-6032 (13 pages).
Soil desiccation cracking is ubiquitous in nature and has significant potential impacts on the engineering geological properties of soils. Previous studies have extensively examined various factors affecting soil cracking behavior through numerous small-sample experiments. However, experimental studies alone cannot accurately describe soil cracking behavior. In this study, we first propose a modeling framework for predicting the surface crack ratio of soil desiccation cracking based on machine learning and interpretability analysis. The framework utilizes 1040 sets of soil cracking experimental data and employs random forest (RF), extreme gradient boosting (XGBoost), and artificial neural network (ANN) models to predict the surface crack ratio of soil desiccation cracking. To clarify the influence of input features on soil cracking behavior, feature importance and Shapley additive explanations (SHAP) are applied for interpretability analysis. The results reveal that the ensemble methods (RF and XGBoost) provide better predictive performance than the deep learning model (ANN). The feature importance analysis shows that soil desiccation cracking is primarily influenced by initial water content, plasticity index, final water content, liquid limit, sand content, clay content, and thickness. Moreover, SHAP-based interpretability analysis further explores how soil cracking responds to various input variables. This study provides new insight into the evolution of soil cracking behavior, enhancing the understanding of its physical mechanisms and facilitating the assessment of potential regional development of soil desiccation cracking.
Keywords: Soil desiccation cracking; Surface crack ratio; Machine learning model; Shapley additive explanations; Interpretability analysis
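Illustrative sketch (not from the paper): a cross-validated comparison of an ensemble model against a neural network on tabular features, mirroring the RF-versus-ANN comparison above. The feature columns and synthetic targets are placeholders; scikit-learn is assumed.

    # Hedged sketch comparing an ensemble and a neural network for crack-ratio regression.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.uniform(size=(600, 4))   # stand-ins: initial water content, plasticity index, clay content, thickness
    y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=600)   # toy surface crack ratio

    for name, model in [("RF", RandomForestRegressor(n_estimators=300, random_state=0)),
                        ("ANN", MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0))]:
        print(name, cross_val_score(model, X, y, cv=5, scoring="r2").mean())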
5. Multi-objective optimization framework in the modeling of belief rule-based systems with interpretability-accuracy trade-off
Authors: YOU Yaqian, SUN Jianbin, TAN Yuejin, JIANG Jiang. Journal of Systems Engineering and Electronics, 2025, Issue 2, pp. 423-435 (13 pages).
The belief rule-based (BRB) system has been popular in complex system modeling due to its good interpretability. However, the current mainstream optimization methods for BRB systems focus only on modeling accuracy and ignore interpretability. A single-objective optimization strategy has been applied to the interpretability-accuracy trade-off by integrating accuracy and interpretability into one optimization objective, but such integration strongly influences the optimization results and introduces considerable subjectivity. Thus, a multi-objective optimization framework for the modeling of BRB systems with an interpretability-accuracy trade-off is proposed in this paper. Firstly, complexity and accuracy are taken as two independent optimization goals, with uniformity as a constraint, to give a mathematical description. Secondly, a classical multi-objective optimization algorithm, nondominated sorting genetic algorithm II (NSGA-II), is utilized as the optimization tool to produce a set of BRB systems with different accuracy and complexity. Finally, a pipeline leakage detection case is studied to verify the feasibility and effectiveness of the developed multi-objective optimization. The comparison illustrates that the proposed multi-objective optimization framework can effectively avoid the subjectivity of single-objective optimization and is capable of jointly optimizing the structure and parameters of BRB systems with an interpretability-accuracy trade-off.
Keywords: Belief rule-based (BRB) systems; Interpretability; Multi-objective optimization; Nondominated sorting genetic algorithm II (NSGA-II); Pipeline leakage detection
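Illustrative sketch (not from the paper): casting an accuracy objective and a complexity objective as a two-objective problem and handing it to NSGA-II. The pymoo library and the toy surrogate objectives below are assumptions; the actual BRB error function and rule-reduction scheme are not reproduced.

    # Hedged sketch of a two-objective interpretability-accuracy trade-off solved with NSGA-II.
    import numpy as np
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize

    class BRBTradeOff(ElementwiseProblem):
        def __init__(self):
            super().__init__(n_var=8, n_obj=2, xl=np.zeros(8), xu=np.ones(8))   # 8 candidate rule weights

        def _evaluate(self, x, out, *args, **kwargs):
            target = np.linspace(0.1, 0.9, 8)
            error = float(np.mean((x - target) ** 2))   # stand-in for modeling error (accuracy objective)
            complexity = float(np.sum(x > 0.05))        # stand-in for number of active rules (complexity objective)
            out["F"] = [error, complexity]

    res = minimize(BRBTradeOff(), NSGA2(pop_size=40), ("n_gen", 60), seed=1, verbose=False)
    print(res.F)   # a Pareto set of (error, complexity) trade-offs, one BRB configuration per row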
6. Towards trustworthy multi-modal motion prediction: Holistic evaluation and interpretability of outputs
Authors: Sandra Carrasco Limeros, Sylwia Majchrowska, Joakim Johnander, Christoffer Petersson, Miguel Ángel Sotelo, David Fernández Llorca. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 3, pp. 557-572 (16 pages).
Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps of current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
Keywords: Autonomous vehicles; Evaluation; Interpretability; Multi-modal motion prediction; Robustness; Trustworthy AI
7. THAPE: A Tunable Hybrid Associative Predictive Engine Approach for Enhancing Rule Interpretability in Association Rule Learning for the Retail Sector
Authors: Monerah Alawadh, Ahmed Barnawi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4995-5015 (21 pages).
Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, our aim is to enhance the quality of analyzing generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits specific goals for retailers, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study in this paper to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
Keywords: Association rule learning; Post-processing; Predictive; Machine learning; Rule interpretability
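Illustrative sketch (not from the paper): mining association rules and then post-processing them with simple interestingness filters, the descriptive half of what THAPE automates. The mlxtend library, the toy transactions, and the pruning thresholds are assumptions.

    # Hedged sketch of rule generation plus a post-processing filter.
    import pandas as pd
    from mlxtend.preprocessing import TransactionEncoder
    from mlxtend.frequent_patterns import apriori, association_rules

    transactions = [["bread", "milk"], ["bread", "diapers", "beer"],
                    ["milk", "diapers", "beer"], ["bread", "milk", "diapers"],
                    ["bread", "milk", "beer"]]
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)

    itemsets = apriori(onehot, min_support=0.4, use_colnames=True)
    rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)

    # Post-processing: keep concise, above-chance rules only.
    pruned = rules[(rules["lift"] > 1.0) & (rules["antecedents"].apply(len) <= 2)]
    print(pruned[["antecedents", "consequents", "support", "confidence", "lift"]])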
8. An interpretability model for syndrome differentiation of HBV-ACLF in traditional Chinese medicine using small-sample imbalanced data
Authors: ZHOU Zhan, PENG Qinghua, XIAO Xiaoxia, ZOU Beiji, LIU Bin, GUO Shuixia. Digital Chinese Medicine (CAS, CSCD), 2024, Issue 2, pp. 137-147 (11 pages).
Objective: Clinical medical record data associated with hepatitis B-related acute-on-chronic liver failure (HBV-ACLF) generally have small sample sizes and a class imbalance. However, most machine learning models are designed based on balanced data and lack interpretability. This study aimed to propose a traditional Chinese medicine (TCM) diagnostic model for HBV-ACLF based on the TCM syndrome differentiation and treatment theory, which is clinically interpretable and highly accurate. Methods: We collected medical records from 261 patients diagnosed with HBV-ACLF, including three syndromes: Yang jaundice (214 cases), Yang-Yin jaundice (41 cases), and Yin jaundice (6 cases). To avoid overfitting of the machine learning model, we excluded the cases of Yin jaundice. After data standardization and cleaning, we obtained 255 relevant medical records of Yang jaundice and Yang-Yin jaundice. To address the class imbalance issue, we employed an oversampling method and five machine learning methods, including logistic regression (LR), support vector machine (SVM), decision tree (DT), random forest (RF), and extreme gradient boosting (XGBoost), to construct the syndrome diagnosis models. This study used precision, F1 score, the area under the receiver operating characteristic (ROC) curve (AUC), and accuracy as model evaluation metrics. The model with the best classification performance was selected to extract the diagnostic rule, and its clinical significance was thoroughly analyzed. Furthermore, we proposed a novel multiple-round stable rule extraction (MRSRE) method to obtain a stable rule set of features that can exhibit the model's clinical interpretability. Results: The precision of the five machine learning models built using oversampled balanced data exceeded 0.90. Among these models, the accuracy of RF classification of syndrome types was 0.92, and the mean F1 scores of the two categories of Yang jaundice and Yang-Yin jaundice were 0.93 and 0.94, respectively. Additionally, the AUC was 0.98. The extraction rules of the RF syndrome differentiation model based on the MRSRE method revealed that the common features of Yang jaundice and Yang-Yin jaundice were wiry pulse; yellowing of the urine, skin, and eyes; normal tongue body; healthy sublingual vessels; nausea; oil loathing; and poor appetite. The main features of Yang jaundice were a red tongue body and thickened sublingual vessels, whereas those of Yang-Yin jaundice were a dark tongue body, pale white tongue body, white tongue coating, lack of strength, slippery pulse, light red tongue body, slimy tongue coating, and abdominal distension. This is aligned with the classifications made by TCM experts based on TCM syndrome differentiation and treatment theory. Conclusion: Our model can be utilized for differentiating HBV-ACLF syndromes and has the potential to be applied to generate other clinically interpretable models with high accuracy on clinical data characterized by small sample sizes and a class imbalance.
Keywords: Traditional Chinese medicine (TCM); Hepatitis B-related acute-on-chronic liver failure (HBV-ACLF); Imbalanced data; Random forest (RF); Interpretability
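Illustrative sketch (not from the paper): oversampling the minority class before fitting a random forest, the general imbalance-handling pattern the study reports. SMOTE is used here as one common oversampler and may differ from the paper's exact method; the data is synthetic, not the HBV-ACLF records.

    # Hedged sketch of oversampling plus random forest on imbalanced tabular data.
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(4)
    X = rng.normal(size=(255, 10))
    y = np.array([0] * 214 + [1] * 41)        # imbalance similar in spirit to 214 vs. 41 cases

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample training set only

    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
    print(classification_report(y_te, rf.predict(X_te)))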
9. Deep radio signal clustering with interpretability analysis based on saliency map
Authors: Huaji Zhou, Jing Bai, Yiran Wang, Junjie Ren, Xiaoniu Yang, Licheng Jiao. Digital Communications and Networks (CSCD), 2024, Issue 5, pp. 1448-1458 (11 pages).
With the development of information technology, radio communication technology has made rapid progress. Many radio signals that appear in space are difficult to classify without manual labeling, so unsupervised radio signal clustering methods have recently become an urgent need. Meanwhile, the high complexity of deep learning makes it difficult to understand the decision results of clustering models, making interpretable analysis essential. This paper proposes a combined loss function for unsupervised clustering based on an autoencoder. The combined loss function includes a reconstruction loss and a deep clustering loss. The deep clustering loss is added on top of the reconstruction loss, which makes similar deep features converge more closely in the feature space. In addition, a feature visualization method for signal clustering is proposed to analyze the interpretability of the autoencoder using saliency maps. Extensive experiments have been conducted on a modulated signal dataset, and the results indicate the superior performance of the proposed method over other clustering algorithms. In particular, for the simulated dataset containing six modulation modes, when the SNR is 20 dB the clustering accuracy of the proposed method is greater than 78%. An interpretability analysis of the clustering model was performed to visualize the significant features of different modulated signals and verified the high separability of the features extracted by the clustering model.
Keywords: Unsupervised radio signal clustering; Autoencoder; Clustering features visualization; Deep learning interpretability
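Illustrative sketch (not from the paper): an autoencoder trained with a combined loss, reconstruction plus a clustering term that pulls latent codes toward learnable centroids. The exact deep clustering loss in the paper may differ; PyTorch and the toy signal features are assumptions.

    # Hedged PyTorch sketch of a reconstruction + deep-clustering combined loss.
    import torch
    import torch.nn as nn

    class ClusteringAE(nn.Module):
        def __init__(self, in_dim=128, latent_dim=16, n_clusters=6):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))
            self.centroids = nn.Parameter(torch.randn(n_clusters, latent_dim))

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), z

    model = ClusteringAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(256, 128)                          # stand-in for extracted signal features

    for _ in range(50):
        recon, z = model(x)
        recon_loss = nn.functional.mse_loss(recon, x)
        cluster_loss = torch.cdist(z, model.centroids).min(dim=1).values.mean()  # pull codes to nearest centroid
        loss = recon_loss + 0.1 * cluster_loss
        opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))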
10. A Novel Belief Rule-Based Fault Diagnosis Method with Interpretability (Cited by 1)
Authors: Zhijie Zhou, Zhichao Ming, Jie Wang, Shuaiwen Tang, You Cao, Xiaoxia Han, Gang Xiang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 8, pp. 1165-1185 (21 pages).
Fault diagnosis plays an irreplaceable role in the normal operation of equipment. A fault diagnosis model is often required to be interpretable to increase the trust between humans and the model. Due to its understandable knowledge expression and transparent reasoning process, the belief rule base (BRB) has extensive applications as an interpretable expert system in fault diagnosis. Optimization is an effective means to weaken the subjectivity of experts in BRB, but the interpretability of BRB may be weakened in the process. Hence, to obtain a credible result, the factors that weaken interpretability in the BRB-based fault diagnosis model are first analyzed; they manifest as deviation from the initial judgement of experts and over-optimization of parameters. For these two factors, three indexes are proposed, namely the consistency index of rules, the consistency index of the rule base, and the over-optimization index, to measure the interpretability of the optimized model. Considering both the accuracy and interpretability of a model, an improved coordinate ascent (I-CA) algorithm is proposed to fine-tune the parameters of the fault diagnosis model based on BRB. In I-CA, an algorithm combining the advance-and-retreat method and the golden section method is employed as the one-dimensional search algorithm. Furthermore, a random optimization sequence and an adaptive step size are proposed to improve the accuracy of the model. Finally, a case study of fault diagnosis in aerospace relays based on BRB is carried out to verify the effectiveness of the proposed method.
Keywords: Fault diagnosis; Belief rule base; Interpretability; Weakening factors; Improved coordinate ascent
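Illustrative sketch (not from the paper): coordinate ascent with a golden-section one-dimensional line search and a random coordinate order, the style of search the I-CA description mentions. The toy objective and bounds are placeholders rather than the BRB fault-diagnosis model.

    # Hedged sketch of coordinate ascent with a golden-section 1-D search.
    import numpy as np

    def golden_section_max(f, lo, hi, tol=1e-5):
        phi = (5 ** 0.5 - 1) / 2
        a, b = lo, hi
        c, d = b - phi * (b - a), a + phi * (b - a)
        while abs(b - a) > tol:
            if f(c) > f(d):
                b, d = d, c
                c = b - phi * (b - a)
            else:
                a, c = c, d
                d = a + phi * (b - a)
        return (a + b) / 2

    def objective(x):                                  # toy smooth objective to maximize
        return -np.sum((x - np.array([0.3, 0.7, 0.5])) ** 2)

    x = np.zeros(3)
    for i in np.random.default_rng(0).permutation(len(x)):   # random optimization sequence
        def f1d(v, i=i):
            x_try = x.copy(); x_try[i] = v
            return objective(x_try)
        x[i] = golden_section_max(f1d, 0.0, 1.0)
    print(x)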
11. New Trend in Fintech: Research on Artificial Intelligence Model Interpretability in Financial Fields (Cited by 1)
Authors: Han Yan, Sheng Lin. Open Journal of Applied Sciences, 2019, Issue 10, pp. 761-773 (13 pages).
With the development of Fintech, applying artificial intelligence (AI) technologies to the financial field is a general trend. However, there are some problematic aspects; for instance, the AI model is often treated as a black box and cannot be interpreted. This paper studies AI model interpretability when such models are applied in the financial field. We analyze the reasons for the black box problem and explore effective solutions. We propose a new kind of automatic Regtech tool, LIMER, and put forward policy suggestions, thereby continuously promoting the development of Fintech to a higher level.
Keywords: Fintech; Regtech; AI model interpretability; LIMER
12. RMA-CNN: A Residual Mixed Domain Attention CNN for Bearings Fault Diagnosis and Its Time-Frequency Domain Interpretability (Cited by 3)
Authors: Dandan Peng, Huan Wang, Wim Desmet, Konstantinos Gryllias. Journal of Dynamics, Monitoring and Diagnostics, 2023, Issue 2, pp. 115-132 (18 pages).
Early fault diagnosis of bearings is crucial for ensuring safe and reliable operations. Convolutional neural networks (CNNs) have achieved significant breakthroughs in machinery fault diagnosis. However, complex and varying working conditions can lead to inter-class similarity and intra-class variability in datasets, making it more challenging for CNNs to learn discriminative features. Furthermore, CNNs are often considered "black boxes" and lack sufficient interpretability in the fault diagnosis field. To address these issues, this paper introduces a residual mixed domain attention CNN method, referred to as RMA-CNN. This method comprises multiple residual mixed domain attention modules (RMAMs), each employing one attention mechanism to emphasize meaningful features in both time and channel domains. This significantly enhances the network's ability to learn fault-related features. Moreover, we conduct an in-depth analysis of the inherent feature learning mechanism of the attention module RMAM to improve the interpretability of CNNs in fault diagnosis applications. Experiments conducted on two datasets, a high-speed aeronautical bearing dataset and a motor bearing dataset, demonstrate that the RMA-CNN achieves remarkable results in diagnostic tasks.
Keywords: Attention; Interpretability; CNN; Fault diagnosis; Rolling element bearings
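Illustrative sketch (not from the paper): a residual block that applies channel attention and temporal attention to a 1-D feature map, the general idea behind a mixed-domain attention module. This is a generic PyTorch construction, not the authors' exact RMAM.

    # Hedged sketch of a residual mixed channel/temporal attention block for 1-D signals.
    import torch
    import torch.nn as nn

    class MixedDomainAttentionBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv = nn.Sequential(nn.Conv1d(channels, channels, 3, padding=1),
                                      nn.BatchNorm1d(channels), nn.ReLU())
            # Channel attention: squeeze over time, excite over channels.
            self.channel_att = nn.Sequential(nn.AdaptiveAvgPool1d(1),
                                             nn.Conv1d(channels, channels // 4, 1), nn.ReLU(),
                                             nn.Conv1d(channels // 4, channels, 1), nn.Sigmoid())
            # Temporal attention: one weight per time step.
            self.time_att = nn.Sequential(nn.Conv1d(channels, 1, 7, padding=3), nn.Sigmoid())

        def forward(self, x):
            feat = self.conv(x)
            feat = feat * self.channel_att(feat)    # emphasize informative channels
            feat = feat * self.time_att(feat)       # emphasize informative time steps
            return feat + x                         # residual connection

    block = MixedDomainAttentionBlock(channels=16)
    print(block(torch.randn(8, 16, 1024)).shape)    # torch.Size([8, 16, 1024])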
13. Improving the Interpretability and Reliability of Regional Land Cover Classification by U-Net Using Remote Sensing Data
Authors: WANG Xinshuang, CAO Jiancheng, LIU Jiange, LI Xiangwu, WANG Lu, ZUO Feihang, BAI Mu. Chinese Geographical Science (SCIE, CSCD), 2022, Issue 6, pp. 979-994 (16 pages).
The accurate and reliable interpretation of regional land cover data is very important for natural resource monitoring and environmental assessment. At present, refined land cover data are mainly obtained by manual visual interpretation, which suffers from a heavy workload and inconsistent interpretation scales. Deep learning has greatly improved the automatic processing and analysis of remote sensing data. However, the accurate interpretation of feature information from massive datasets remains a difficult problem in wide-area regional land cover classification. To improve the efficiency of deep learning-based remote sensing image interpretation, we selected multisource remote sensing data, assessed the interpretability of the U-Net model on surface spatial scenes with different levels of complexity, and proposed a new method of stereoscopic accuracy verification (SAV) to evaluate the reliability of the classification results. The results show that classification accuracy is more highly correlated with terrain and landscape than with other factors related to the image data, such as platform and spatial resolution. As the complexity of surface spatial scenes increases, the accuracy of the classification results mainly shows a fluctuating declining trend. We also identify the distribution characteristics of the SAV evaluation results for different land cover types in each surface spatial scene. Based on the results observed in this study, we consider the distinction of interpretability and reliability across diverse ground object types and design targeted classification strategies for different surface scenes, which can greatly improve classification efficiency. The key achievement of this study is to provide a theoretical basis for remote sensing information analysis and an accuracy evaluation method for regional land cover classification; the proposed method can help improve the likelihood that intelligent interpretation can replace manual acquisition.
Keywords: Land cover classification; Stereoscopic accuracy verification; U-Net; Remote sensing; Interpretability; Reliability
14. A New Prediction System Based on Self-Growth Belief Rule Base with Interpretability Constraints
Authors: Yingmei Li, Peng Han, Wei He, Guangling Zhang, Hongwei Wei, Boying Zhao. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 3761-3780 (20 pages).
Prediction systems are an important aspect of intelligent decision-making. In engineering practice, the complex system structure and the external environment introduce many uncertain factors into the model, which influence its modeling accuracy. The belief rule base (BRB) can implement nonlinear modeling and express a variety of uncertain information, including fuzziness, ignorance, and randomness. However, the BRB system also has two main problems. Firstly, modeling methods based on expert knowledge make it difficult to guarantee the model's accuracy. Secondly, interpretability is not considered in the optimization process of current research, resulting in the destruction of the interpretability of BRB. To balance the accuracy and interpretability of the model, a self-growth belief rule base with interpretability constraints (SBRB-I) is proposed. The reasoning process of the SBRB-I model is based on the evidence reasoning (ER) approach. Moreover, the self-growth learning strategy ensures effective cooperation between the data-driven model and the expert system. A case study showed that the accuracy and interpretability of the model could be guaranteed. The SBRB-I model has good application prospects in prediction systems.
Keywords: Belief rule base; Evidence reasoning; Interpretability; Optimization; Prediction system
15. A human-centric perspective on interpretability in large language models
Authors: Zihan Zhou, Minfeng Zhu, Wei Chen. Visual Informatics, 2025, Issue 1, pp. I0002-I0004 (3 pages).
With the rapid advancement of natural language processing (NLP), large language models (LLMs) have demonstrated exceptional performance across tasks (Xu et al., 2024; Lee et al., 2024; Tan et al., 2023) like machine translation, text summarization, and question answering, significantly accelerating NLP research. Furthermore, LLMs have also facilitated advancements across diverse fields. In robotics, for example, LLMs enhance the interpretation and translation of user voice commands, enabling precise planning and execution of robotic arm movements (Driess et al., 2023).
Keywords: Large language models (LLMs); Machine translation; Text summarization; Natural language processing (NLP); Human-centric; Performance; Interpretability
16. A knowledge graph attention network for the cold-start problem in intelligent manufacturing: Interpretability and accuracy improvement
Authors: Ziye Zhou, Yuqi Zhang, Shuize Wang, David San Martin, Yongqian Liu, Yang Liu, Chenchong Wang, Wei Xu. Materials Genome Engineering Advances, 2025, Issue 2, pp. 24-36 (13 pages).
In the rolling production of steel, predicting the performance of new products is challenging due to the low variety of data distributions resulting from standardized manufacturing processes and fixed product categories. This scenario poses a significant hurdle for machine learning models, leading to what is commonly known as the "cold-start problem". To address this issue, we propose a knowledge graph attention neural network for steel manufacturing (SteelKGAT). By leveraging expert knowledge and a multi-head attention mechanism, SteelKGAT aims to enhance prediction accuracy. Our experimental results demonstrate that the SteelKGAT model outperforms existing methods when generalizing to previously unseen products. Only the SteelKGAT model accurately captures the feature trend, thereby offering correct guidance in product tuning, which is of practical significance for new product development (NPD). Additionally, we employ the Integrated Gradients (IG) method to shed light on the model's predictions, revealing the relative importance of each feature within the knowledge graph. Notably, this work represents the first application of knowledge graph attention neural networks to address the cold-start problem in steel rolling production. By combining domain expertise and interpretable predictions, our knowledge-informed SteelKGAT model provides accurate insights into the mechanical properties of products even in cold-start scenarios.
Keywords: Attention mechanisms; Cold-start problem; Graph neural network; Interpretable machine learning; Knowledge graph; Materials design; Mechanical performance
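Illustrative sketch (not from the paper): a from-scratch Integrated Gradients routine for a generic differentiable model, the attribution method the abstract applies to SteelKGAT. The tiny MLP, the zero baseline, and the step count are assumptions.

    # Hedged sketch of Integrated Gradients for feature attribution.
    import torch
    import torch.nn as nn

    def integrated_gradients(model, x, baseline=None, steps=50):
        baseline = torch.zeros_like(x) if baseline is None else baseline
        alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)        # interpolation coefficients
        path = baseline + alphas * (x - baseline)                    # straight path from baseline to x
        path.requires_grad_(True)
        model(path).sum().backward()
        avg_grad = path.grad.mean(dim=0)                             # average gradient along the path
        return (x - baseline) * avg_grad                             # attribution per input feature

    model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
    x = torch.randn(5)
    print(integrated_gradients(model, x))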
17. Mapping soil organic carbon in fragmented agricultural landscapes: The efficacy and interpretability of multi-category remote sensing variables
Authors: Yujiao Wei, Yiyun Chen, Jiaxue Wang, Peiheng Yu, Lu Xu, Chi Zhang, Huanfeng Shen, Yaolin Liu, Ganlin Zhang. Journal of Integrative Agriculture, 2025, Issue 11, pp. 4395-4414 (20 pages).
Accurately mapping the spatial distribution of soil organic carbon (SOC) is crucial for guiding agricultural management and improving soil carbon sequestration, especially in fragmented agricultural landscapes. Although remote sensing provides spatially continuous environmental information about heterogeneous agricultural landscapes, its relationship with SOC remains unclear. In this study, we hypothesized that multi-category remote sensing-derived variables can enhance our understanding of SOC variation within complex landscape conditions. Taking the Qilu Lake watershed in Yunnan, China, as a case study area and based on 216 topsoil samples collected from irrigation areas, we applied the extreme gradient boosting (XGBoost) model to investigate the contributions of vegetation indices (VI), brightness indices (BI), moisture indices (MI), and spectral transformations (ST, principal component analysis and tasseled cap transformation) to SOC mapping. The results showed that ST contributed the most to SOC prediction accuracy, followed by MI, VI, and BI, with improvements in R^(2) of 29.27, 26.83, 19.51, and 14.43%, respectively. The dominance of ST can be attributed to the fact that it contains richer remote sensing spectral information. The optimal SOC prediction model integrated soil properties, topographic factors, location factors, and landscape metrics, as well as remote sensing-derived variables, and achieved RMSE and MAE of 15.05 and 11.42 g kg^(-1), and R^(2) and CCC of 0.57 and 0.72, respectively. The Shapley additive explanations deciphered the nonlinear and threshold effects that exist between soil moisture, vegetation status, soil brightness, and SOC. Compared with traditional linear regression models, interpretable machine learning has advantages in prediction accuracy and in revealing the influences of variables that reflect landscape characteristics on SOC. Overall, this study not only reveals how remote sensing-derived variables contribute to our understanding of SOC distribution in fragmented agricultural landscapes but also clarifies their efficacy. Through interpretable machine learning, we can further elucidate the causes of SOC variation, which is important for sustainable soil management and agricultural practices.
Keywords: Soil organic carbon; Remote sensing-derived variables; Shapley additive explanations; Efficacy and interpretability; Fragmented agricultural landscapes
18. A deep learning-based global tropical cyclogenesis prediction model and its interpretability analysis
Authors: Bin MU, Xin WANG, Shijin YUAN, Yuxuan CHEN, Guansong WANG, Bo QIN, Guanbo ZHOU. Science China Earth Sciences (SCIE, EI, CAS, CSCD), 2024, Issue 12, pp. 3671-3695 (25 pages).
Tropical cloud clusters (TCCs) can potentially develop into tropical cyclones (TCs), leading to significant casualties and economic losses. Accurate prediction of tropical cyclogenesis (TCG) is crucial for early warnings. Most traditional deep learning methods applied to TCG prediction rely on predictors from a single time point, neglect the ocean-atmosphere interactions, and exhibit low model interpretability. This study proposes the Tropical Cyclogenesis Prediction-Net (TCGP-Net) based on the Swin Transformer, which leverages convolutional operations and attention mechanisms to encode spatiotemporal features and capture the temporal evolution of the predictors. This model incorporates the coupled ocean-atmosphere interactions, including multiple variables such as sea surface temperature. Additionally, causal inference and integrated gradients are employed to validate the effectiveness of the predictors and provide an interpretability analysis of the model's decision-making process. The model is trained using GridSat satellite data and ERA5 reanalysis datasets. Experimental results demonstrate that TCGP-Net achieves high accuracy and stability, with a detection rate of 97.9% and a false alarm rate of 2.2% for predicting TCG 24 hours in advance, significantly outperforming existing models. This indicates that TCGP-Net is a reliable tool for tropical cyclogenesis prediction.
Keywords: Tropical cyclogenesis prediction; Deep learning; Feature fusion; Interpretability; Causal inference
19. Interpretability of Neural Networks Based on Game-theoretic Interactions
Authors: Huilin Zhou, Jie Ren, Huiqi Deng, Xu Cheng, Jinpeng Zhang, Quanshi Zhang. Machine Intelligence Research (EI, CSCD), 2024, Issue 4, pp. 718-739 (22 pages).
This paper introduces the system of game-theoretic interactions, which connects the explanation of knowledge encoded in a deep neural network (DNN) with the explanation of the representation power of a DNN. In this system, we define two game-theoretic interaction indexes, namely the multi-order interaction and the multivariate interaction. More crucially, we use these interaction indexes to explain feature representations encoded in a DNN from the following four aspects: (1) quantifying knowledge concepts encoded by a DNN; (2) exploring how a DNN encodes visual concepts, and extracting prototypical concepts encoded in the DNN; (3) learning optimal baseline values for the Shapley value, and providing a unified perspective to compare fourteen different attribution methods; (4) theoretically explaining the representation bottleneck of DNNs. Furthermore, we prove the relationship between the interactions encoded in a DNN and the representation power of a DNN (e.g., generalization power, adversarial transferability, and adversarial robustness). In this way, game-theoretic interactions successfully bridge the gap between "the explanation of knowledge concepts encoded in a DNN" and "the explanation of the representation capacity of a DNN" as a unified explanation.
Keywords: Model interpretability and transparency; Explainable AI; Game theory; Interaction; Deep learning
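Illustrative sketch (not from the paper): the kind of game-theoretic quantity the framework above builds on, here the exact Shapley value of each input variable under a toy characteristic function v(S). In DNN explanation, v would be replaced by the network's output on masked inputs; the toy rewards below are assumptions.

    # Hedged sketch of exact Shapley values by subset enumeration.
    from itertools import combinations
    from math import factorial

    players = [0, 1, 2]

    def v(S):
        S = set(S)
        reward = 1.0 if 0 in S else 0.0
        reward += 0.5 if 1 in S else 0.0
        reward += 2.0 if {0, 2} <= S else 0.0       # an interaction between variables 0 and 2
        return reward

    def shapley(i, n=len(players)):
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S + (i,)) - v(S))
        return total

    print({i: round(shapley(i), 3) for i in players})   # variable 2 is credited only via its interaction with 0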
20. Autism Spectrum Disorder Classification with Interpretability in Children Based on Structural MRI Features Extracted Using Contrastive Variational Autoencoder
Authors: Ruimin Ma, Ruitao Xie, Yanlin Wang, Jintao Meng, Yanjie Wei, Yunpeng Cai, Wenhui Xi, Yi Pan. Big Data Mining and Analytics (EI, CSCD), 2024, Issue 3, pp. 781-793 (13 pages).
Autism Spectrum Disorder (ASD) is a highly disabling mental disease that brings significant impairments of social interaction ability to patients, making early screening and intervention of ASD critical. With the development of machine learning and neuroimaging technology, extensive research has been conducted on the machine classification of ASD based on structural Magnetic Resonance Imaging (s-MRI). However, most studies involve datasets in which participants' ages are above 5 years, and they lack interpretability. In this paper, we propose a machine learning method for ASD classification in children with an age range from 0.92 to 4.83 years, based on s-MRI features extracted using a Contrastive Variational AutoEncoder (CVAE). 78 s-MRIs, collected from Shenzhen Children's Hospital, are used for training the CVAE, which consists of both an ASD-specific feature channel and a common-shared feature channel. ASD participants represented by the ASD-specific features can be easily discriminated from Typical Control (TC) participants represented by the common-shared features. In the case of degraded predictive accuracy when the data size is extremely small, a transfer learning strategy is proposed here as a potential solution. Finally, we conduct a neuroanatomical interpretation based on the correlation between the s-MRI features extracted from the CVAE and the surface area of different cortical regions, which discloses potential biomarkers that could help target treatments of ASD in the future.
Keywords: Autism Spectrum Disorder (ASD) classification; Contrastive Variational AutoEncoder (CVAE); Transfer learning; Neuroanatomical interpretation