Journal Articles
11 articles found
1. Interpretability of Credit Risk Assessment Results: A Model-Agnostic Methodological Perspective
Authors: 刘佳明, 于镓宁. 《系统工程》 (Systems Engineering), Peking University Core Journal, 2025, No. 5, pp. 146-158 (13 pages).
Credit risk assessment models are important tools in financial risk management. Most existing work on credit risk assessment builds machine learning models with prediction performance as the primary goal. However, most machine learning models are weakly interpretable, which lowers decision-makers' trust in their predictions and thus limits the application of machine learning in credit risk assessment. To improve the interpretability of machine learning predictions, this paper uses XGBoost to predict credit risk and, from a model-agnostic methodological perspective, explains the predictions with partial dependence plots and SHAP values. An empirical study on real credit data shows that total loan amount, interest rate, total repayment, total interest repaid, and recent repayment play key roles in predicting credit risk. Among them, total loan amount, total repayment, total interest, and recent repayment exhibit linear dependence on credit risk status, while the interest rate shows a complex nonlinear dependence; the data features are not mutually independent and interact within the prediction model; and feature contributions differ markedly across individuals, with different credit statuses largely driven by different distributions of total repayment, recent repayment, and total loan amount.
Keywords: credit risk; prediction; interpretable machine learning; model-agnostic
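The partial dependence plots cited in this abstract can be illustrated with a minimal sketch: fix one feature to a grid value, average the model's predictions over the data, and repeat across the grid. The model, feature names, and numbers below are toy stand-ins, not the paper's XGBoost model or credit data.

```python
def model(loan_total, interest_rate):
    """Toy risk score: linear in loan amount, nonlinear in the rate."""
    return 0.3 * loan_total + (interest_rate - 0.1) ** 2

# Small toy "dataset" of (loan_total, interest_rate) rows.
data = [(1.0, 0.05), (2.0, 0.12), (3.0, 0.20), (4.0, 0.08)]

def partial_dependence(grid, feature_index):
    """Average model output over the data while clamping one feature."""
    curve = []
    for v in grid:
        total = 0.0
        for row in data:
            args = list(row)
            args[feature_index] = v      # clamp the swept feature
            total += model(*args)
        curve.append(total / len(data))
    return curve

# Sweep the interest rate; the resulting curve is the PDP for that feature.
pd_rate = partial_dependence([0.05, 0.10, 0.15, 0.20], feature_index=1)
```

A flat segment in the curve indicates no marginal effect over that range; the quadratic bump around 0.1 here mimics the kind of nonlinear dependence the paper reports for the interest rate.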
2. A Model-Agnostic Hierarchical Framework Towards Trajectory Prediction
Authors: Tang-Wen Qian, Yuan Wang, Yong-Jun Xu, Zhao Zhang, Lin Wu, Qiang Qiu, Fei Wang. Journal of Computer Science & Technology, 2025, No. 2, pp. 322-339 (18 pages).
Predicting the future trajectories of multiple agents is essential for various applications in real life, such as surveillance systems, autonomous driving, and social robots. The trajectory prediction task is influenced by many factors, including the individual historical trajectory, interactions between agents, and the fuzzy nature of the observed agents' motion. While existing methods have made great progress on the topic of trajectory prediction, they treat all the information uniformly, which limits the effectiveness of information utilization. To this end, in this paper, we propose and utilize a model-agnostic framework to regard all the information in a two-level hierarchical view. Particularly, the first-level view is the inter-trajectory view. In this level, we observe that the difficulty in predicting different trajectory samples varies. We define trajectory difficulty and train the proposed framework in an "easy-to-hard" schema. The second-level view is the intra-trajectory level. We find the influencing factors for a particular trajectory can be divided into two parts. The first part is global features, which keep stable within a trajectory, i.e., the expected destination. The second part is local features, which change over time, i.e., the current position. We believe that the two types of information should be handled in different ways. The hierarchical view is beneficial to take full advantage of the information in a fine-grained way. Experimental results validate the effectiveness of the proposed model-agnostic framework.
Keywords: spatial-temporal data mining; trajectory prediction; hierarchical framework; model-agnostic
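The "easy-to-hard" schema described above can be sketched as a curriculum ordering: score each trajectory by a difficulty proxy and present samples in increasing order. The constant-velocity extrapolation error used as the proxy here is an assumption for illustration, not the paper's definition of trajectory difficulty.

```python
def difficulty(traj):
    """Error of a constant-velocity extrapolation over a 1-D trajectory."""
    err = 0.0
    for i in range(2, len(traj)):
        pred = 2 * traj[i - 1] - traj[i - 2]   # constant-velocity guess
        err += abs(traj[i] - pred)
    return err

trajectories = [
    [0.0, 1.0, 2.0, 3.0],   # perfectly linear: easy
    [0.0, 1.0, 2.5, 4.5],   # mildly accelerating
    [0.0, 2.0, 1.0, 4.0],   # erratic: hard
]

# "Easy-to-hard" schema: present samples in increasing difficulty.
curriculum = sorted(trajectories, key=difficulty)
```

In practice a trained model would iterate over `curriculum` in order, or expand the training pool from the easy end as epochs progress.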
3. Research on Multilingual English Translation Quality Improvement Using T5 and MAML
Authors: 梅玲, 孙红萍. 《鄂州大学学报》 (Journal of Ezhou University), 2025, No. 2, pp. 98-102 and 105 (6 pages).
To improve translation performance and quality, this study combines T5 (Text-To-Text Transfer Transformer) and MAML (Model-Agnostic Meta-Learning) and investigates their application to the continuous quality improvement of multilingual English translation. The pretrained parameters of the T5 model are fine-tuned with autoregressive learning to build a generative multilingual English translation model; within the MAML framework, the model is trained across multiple tasks so that it adapts quickly to new tasks from small amounts of data. A multilingual parallel corpus is built with a web crawler, and the T5-MAML translation model is evaluated with BLEU (Bilingual Evaluation Understudy) and TER (Translation Error Rate). Experimental results show that, compared with the OpenNMT (Open Neural Machine Translation), Transformer, and Opus-MT (Open Parallel Corpus-Machine Translation) baselines, the proposed model's mean BLEU score is higher by 6.05%, 2.59%, and 2.05%, respectively. The results indicate that the T5-MAML model effectively improves multilingual English translation quality and produces more natural and fluent output.
Keywords: multilingual English translation; Text-To-Text Transfer Transformer; Model-Agnostic Meta-Learning; English translation model
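The MAML idea referenced in this abstract, adapting quickly to a new task from a few examples, can be shown in miniature with first-order MAML on one-parameter linear models y = w·x. The paper applies the framework to T5; everything below, including the toy tasks and learning rates, is illustrative only.

```python
def grad(w, batch):
    """d/dw of mean squared error for y = w * x over (x, y) pairs."""
    return sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)

def fomaml(tasks, w0=0.0, inner_lr=0.1, outer_lr=0.1, meta_steps=50):
    """First-order MAML: adapt per task, then average query-set gradients."""
    w = w0
    for _ in range(meta_steps):
        outer = 0.0
        for support, query in tasks:
            w_task = w - inner_lr * grad(w, support)   # inner adaptation step
            outer += grad(w_task, query)               # first-order meta-gradient
        w -= outer_lr * outer / len(tasks)
    return w

# Two toy "tasks" whose true slopes are 2.0 and 4.0:
tasks = [
    ([(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]),
    ([(1.0, 4.0), (2.0, 8.0)], [(3.0, 12.0)]),
]
w_meta = fomaml(tasks)   # settles at the slope midway between the tasks
```

The meta-initialization converges to w = 3.0, equidistant from both task optima, so a single inner gradient step moves it close to either task: the "fast adaptation from few examples" property the abstract relies on.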
4. Enhanced Wheat Disease Detection Using Deep Learning and Explainable AI Techniques
Authors: Hussam Qushtom, Ahmad Hasasneh, Sari Masri. Computers, Materials & Continua, 2025, No. 7, pp. 1379-1395 (17 pages).
This study presents an enhanced convolutional neural network (CNN) model integrated with Explainable Artificial Intelligence (XAI) techniques for accurate prediction and interpretation of wheat crop diseases. The aim is to streamline the detection process while offering transparent insights into the model's decision-making to support effective disease management. To evaluate the model, a dataset was collected from wheat fields in Kotli, Azad Kashmir, Pakistan, and tested across multiple data splits. The proposed model demonstrates improved stability, faster convergence, and higher classification accuracy. The results show significant improvements in prediction accuracy and stability compared to prior works, achieving up to 100% accuracy in certain configurations. In addition, XAI methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) were employed to explain the model's predictions, highlighting the most influential features contributing to classification decisions. The combined use of CNN and XAI offers a dual benefit: strong predictive performance and clear interpretability of outcomes, which is especially critical in real-world agricultural applications. These findings underscore the potential of integrating deep learning models with XAI to advance automated plant disease detection. The study offers a precise, reliable, and interpretable solution for improving wheat production and promoting agricultural sustainability. Future extensions of this work may include scaling the dataset across broader regions and incorporating additional modalities such as environmental data to enhance model robustness and generalization.
Keywords: convolutional neural network (CNN); wheat crop disease; deep learning; disease detection; Shapley Additive Explanations (SHAP); Local Interpretable Model-agnostic Explanations (LIME)
5. Two-Stage Approach for Targeted Knowledge Transfer in Self-Knowledge Distillation
Authors: Zimo Yin, Jian Pu, Yijie Zhou, Xiangyang Xue. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 11, pp. 2270-2283 (14 pages).
Knowledge distillation (KD) enhances student network generalization by transferring dark knowledge from a complex teacher network. To optimize computational expenditure and memory utilization, self-knowledge distillation (SKD) extracts dark knowledge from the model itself rather than an external teacher network. However, previous SKD methods performed distillation indiscriminately on full datasets, overlooking the analysis of representative samples. In this work, we present a novel two-stage approach to providing targeted knowledge on specific samples, named two-stage approach self-knowledge distillation (TOAST). We first soften the hard targets using class medoids generated based on logit vectors per class. Then, we iteratively distill the under-trained data with past predictions of half the batch size. The two-stage knowledge is linearly combined, efficiently enhancing model performance. Extensive experiments conducted on five backbone architectures show our method is model-agnostic and achieves the best generalization performance. Besides, TOAST is strongly compatible with existing augmentation-based regularization methods. Our method also obtains a speedup of up to 2.95x compared with a recent state-of-the-art method.
Keywords: cluster-based regularization; iterative prediction refinement; model-agnostic framework; self-knowledge distillation (SKD); two-stage knowledge transfer
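The first TOAST stage, softening hard targets with class medoids of logit vectors, can be sketched generically: pick the most central logit vector per class, then mix it into the one-hot target. The L1 distance, toy logits, sum-normalization, and mixing weight `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
def medoid(vectors):
    """Return the member minimizing total L1 distance to the others."""
    def total_dist(v):
        return sum(sum(abs(a - b) for a, b in zip(v, u)) for u in vectors)
    return min(vectors, key=total_dist)

class_logits = {
    0: [[2.0, 0.1], [1.8, 0.2], [5.0, 0.0]],   # last vector is an outlier
    1: [[0.2, 1.9], [0.1, 2.1]],
}
medoids = {c: medoid(vs) for c, vs in class_logits.items()}

def soften(hard_label, num_classes=2, alpha=0.7):
    """Mix a one-hot target with the normalized class medoid (alpha assumed)."""
    one_hot = [1.0 if i == hard_label else 0.0 for i in range(num_classes)]
    m = medoids[hard_label]
    s = sum(m)
    return [alpha * o + (1 - alpha) * (mi / s) for o, mi in zip(one_hot, m)]
```

Using the medoid rather than the mean keeps the outlier logit vector from dragging the softened target off-center, which is the usual motivation for medoids over centroids.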
6. Convolutional neural network based data interpretable framework for Alzheimer's treatment planning
Authors: Sazia Parvin, Sonia Farhana Nimmy, Md Sarwar Kamal. Visual Computing for Industry, Biomedicine, and Art, 2024, No. 1, pp. 375-386 (12 pages).
Alzheimer's disease (AD) is a neurological disorder that predominantly affects the brain. In the coming years, it is expected to spread rapidly, with limited progress in diagnostic techniques. Various machine learning (ML) and artificial intelligence (AI) algorithms have been employed to detect AD using single-modality data. However, recent developments in ML have enabled the application of these methods to multiple data sources and input modalities for AD prediction. In this study, we developed a framework that utilizes multimodal data (tabular data, magnetic resonance imaging (MRI) images, and genetic information) to classify AD. As part of the pre-processing phase, we generated a knowledge graph from the tabular data and MRI images. We employed graph neural networks for knowledge graph creation, and a region-based convolutional neural network approach for image-to-knowledge-graph generation. Additionally, we integrated various explainable AI (XAI) techniques to interpret and elucidate the prediction outcomes derived from multimodal data. Layer-wise relevance propagation was used to explain the layer-wise outcomes in the MRI images. We also incorporated submodular pick local interpretable model-agnostic explanations to interpret the decision-making process based on the tabular data provided. Genetic expression values play a crucial role in AD analysis. We used a graphical gene tree to identify genes associated with the disease. Moreover, a dashboard was designed to display XAI outcomes, enabling experts and medical professionals to easily comprehend the prediction results.
Keywords: multimodal; region-based convolutional neural network; layer-wise relevance propagation; submodular pick local interpretable model-agnostic explanations; graphical gene tree; Alzheimer's disease
7. Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods
Authors: Wahidul Hasan Abir, Faria Rahman Khanam, Kazi Nabiul Alam, Myriam Hadjouni, Hela Elmannai, Sami Bourouis, Rajesh Dey, Mohammad Monirujjaman Khan. Intelligent Automation & Soft Computing (SCIE), 2023, No. 2, pp. 2151-2169 (19 pages).
Nowadays, deepfake is wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person's likeness with another person in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfake uses the latest technology like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human eye. Therefore, automated solutions employed by DL can be an efficient approach for detecting deepfake. Though the "black-box" nature of the DL system allows for robust predictions, they cannot be completely trustworthy. Explainability is the first step toward achieving transparency, but the existing incapacity of DL to explain its own decisions to human users limits the efficacy of these systems. Explainable Artificial Intelligence (XAI) can solve this problem by interpreting the predictions of these systems. This work provides a comprehensive study of deepfake detection using the DL method and analyzes the result of the most effective algorithm with Local Interpretable Model-Agnostic Explanations (LIME) to assure its validity and reliability. This study identifies real and deepfake images using different Convolutional Neural Network (CNN) models to get the best accuracy. It also explains which part of the image caused the model to make a specific classification using the LIME algorithm. To apply the CNN models, the dataset was taken from Kaggle, which includes 70k real images from the Flickr dataset collected by Nvidia and 70k fake faces generated by StyleGAN at 256 px in size. For the experiments, Jupyter Notebook, TensorFlow, NumPy, and Pandas were used as software, and InceptionResNetV2, DenseNet201, InceptionV3, and ResNet152V2 were used as CNN models. All these models performed well: InceptionV3 gained 99.68% accuracy, ResNet152V2 got an accuracy of 99.19%, and DenseNet201 performed with 99.81% accuracy. However, InceptionResNetV2 achieved the highest accuracy of 99.87%, which was verified later with the LIME algorithm for XAI, where the proposed method performed the best. The obtained results and dependability demonstrate its preference for detecting deepfake images effectively.
Keywords: deepfake; deep learning; explainable artificial intelligence (XAI); convolutional neural network (CNN); local interpretable model-agnostic explanations (LIME)
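The LIME-style attribution used in this abstract, explaining which image regions drive a classification, can be caricatured by toggling regions on and off and comparing average scores. Real LIME fits a weighted linear surrogate over superpixel perturbations; the additive toy classifier below only conveys the idea, and its weights are made up.

```python
import itertools

def black_box(mask):
    """Toy deepfake score over 3 image regions; region 1 drives the decision."""
    weights = [0.1, 0.8, 0.05]
    return sum(w for w, on in zip(weights, mask) if on)

def region_contributions(n_regions=3):
    """Mean score with each region present minus mean score with it absent."""
    scores = {m: black_box(m) for m in itertools.product([0, 1], repeat=n_regions)}
    contrib = []
    for r in range(n_regions):
        on = [s for m, s in scores.items() if m[r] == 1]
        off = [s for m, s in scores.items() if m[r] == 0]
        contrib.append(sum(on) / len(on) - sum(off) / len(off))
    return contrib

contrib = region_contributions()
top_region = max(range(3), key=lambda r: contrib[r])
```

For an additive black box this recovers each region's weight exactly; for a real CNN the surrogate only holds locally, which is why LIME perturbs around one specific image at a time.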
8. Explainable prediction of loan default based on machine learning models
Authors: Xu Zhu, Qingyong Chu, Xinchang Song, Ping Hu, Lu Peng. Data Science and Management, 2023, No. 3, pp. 123-133 (11 pages).
Owing to the convenience of online loans, an increasing number of people are borrowing money on online platforms. With the emergence of machine learning technology, predicting loan defaults has become a popular topic. However, machine learning models have a black-box problem that cannot be disregarded. To make the prediction model rules more understandable and thereby increase the user's faith in the model, an explanatory model must be used. Logistic regression, decision tree, XGBoost, and LightGBM models are employed to predict a loan default. The prediction results show that LightGBM and XGBoost outperform logistic regression and decision tree models in terms of predictive ability. The area under the curve for LightGBM is 0.7213. The accuracies of LightGBM and XGBoost exceed 0.8, and their precisions exceed 0.55. Simultaneously, we employed the local interpretable model-agnostic explanations approach to undertake an explainable analysis of the prediction findings. The results show that factors such as the loan term, loan grade, credit rating, and loan amount affect the predicted outcomes.
Keywords: explainable prediction; machine learning; loan default; local interpretable model-agnostic explanations
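The area-under-curve figure reported above (0.7213 for LightGBM) is the ROC AUC, which has a simple rank interpretation that can be computed directly: the probability that a randomly chosen defaulter is scored above a randomly chosen non-defaulter. The labels and scores below are toy data, not the paper's loan records.

```python
def auc(labels, scores):
    """Probability a random positive is scored above a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]               # toy default labels (1 = default)
scores = [0.9, 0.7, 0.6, 0.2, 0.4, 0.4]   # toy model scores
auc_value = auc(labels, scores)
```

Ties count as half a win, which matches the trapezoidal ROC computation; an AUC of 0.5 means the scores rank defaulters no better than chance.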
9. Multiobjective optimization of dielectric, thermal, and mechanical properties of inorganic glasses utilizing explainable machine learning and genetic algorithm
Authors: Jincheng Qin, Faqiang Zhang, Mingsheng Ma, Yongxiang Li, Zhifu Liu. Materials Genome Engineering Advances, 2025, No. 2, pp. 133-145 (13 pages).
To meet the demands of advanced electronic devices, inorganic glasses are required to have comprehensive dielectric, thermal, and mechanical properties. However, the complex composition–property relationship and vast compositional diversity hinder optimization. This study developed machine learning models to predict permittivity, dielectric loss, thermal conductivity, coefficient of thermal expansion, and Young's modulus based on the composition features of inorganic glasses. The optimal models achieve R² values of 0.9614, 0.7411, 0.9454, 0.9684, and 0.8164, respectively. By integrating domain knowledge with model-agnostic interpretation methods, feature contributions and interactions were analyzed. The mixed alkali effect is crucial for property regulation, especially Na-K for dielectric loss and Na-Li for thermal conductivity. The boron anomaly shifts the high-λ region to a balanced composition of alkali metals with rising B%. Multiobjective optimization of the properties was realized using a genetic algorithm framework. After 23 iterations, the optimal material in the MgO-Al₂O₃-B₂O₃-SiO₂ system exhibits ε_r = 4.78, tan δ = 0.00063, λ = 2.59 W/(m·K), α = 50.27×10⁻⁷ K⁻¹, and E = 82.41 GPa, outperforming all materials in the dataset. The computational effort was reduced to 1/19 of that required by exhaustive search methods. This study provides a model interpretation framework and an effective multiobjective optimization strategy for glass design.
Keywords: genetic algorithm; inorganic glass; machine learning; model-agnostic interpretation; multiobjective optimization
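At the core of a multiobjective genetic-algorithm search like the one above is Pareto dominance: keep candidates that no other candidate beats on every objective. A minimal filter, with made-up (permittivity, dielectric loss) pairs to minimize, might look like:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep candidates that no other candidate dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy (permittivity, dielectric loss) pairs, both to be minimized.
candidates = [(4.8, 0.0006), (5.1, 0.0005), (5.3, 0.0009), (4.8, 0.0008)]
front = pareto_front(candidates)
```

A GA framework would apply such a filter each generation, then breed the surviving front; the five-property problem in the abstract works the same way with five-element objective tuples.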
10. Interpretable artificial intelligence approach for understanding shear strength in stabilized clay soils using real field soil samples
Authors: Mohamed Noureldin, Aghyad Al Kabbani, Alejandra Lopez, Leena Korkiala-Tanttu. Frontiers of Structural and Civil Engineering, 2025, No. 5, pp. 760-781 (22 pages).
Deep mixing, also known as deep stabilization, is a widely used ground improvement method in Nordic countries, particularly in urban and infrastructural projects, aiming to enhance the properties of soft, sensitive clays. Understanding the shear strength of stabilized soils and identifying key influencing factors are essential for ensuring the structural stability and durability of engineering structures. This study introduces a novel explainable artificial intelligence framework to investigate critical soil properties affecting shear strength, utilizing a dataset derived from stabilization tests conducted on laboratory samples from the 1990s. The proposed framework investigates the statistical variability and distribution of crucial parameters affecting shear strength within the collected dataset. Subsequently, machine learning models are trained and tested to predict soil shear strength based on input features such as the water/binder ratio and water content. Global model analysis using feature importance and Shapley additive explanations is conducted to understand the influence of soil input features on shear strength. Further exploration is carried out using partial dependence plots, individual conditional expectation plots, and accumulated local effects to uncover the degree of dependency and important thresholds between key stabilized soil parameters and shear strength. Heat map and feature interaction analysis techniques are then utilized to investigate soil property interactions and correlations. Lastly, a more specific investigation is conducted on particular soil samples to highlight the most influential soil properties locally, employing the local interpretable model-agnostic explanations technique. The validation of the framework involves analyzing laboratory test results obtained from uniaxial compression tests. The framework demonstrates an ability to predict the shear strength of stabilized soil samples with an accuracy surpassing 90%. Importantly, the explainability results underscore the substantial impact of water content and the water/binder ratio on shear strength.
Keywords: explainable artificial intelligence; Shapley additive explanations; local interpretable model-agnostic explanations; partial dependence plots; stabilized soil; water/binder ratio; water content; shear strength
11. Examining the characteristics between time and distance gaps of secondary crashes
Authors: Xinyuan Liu, Jinjun Tang, Chen Yuan, Fan Gao, Xizhi Ding. Transportation Safety and Environment (EI), 2024, No. 1, pp. 116-131 (16 pages).
Understanding the characteristics of the time and distance gaps between primary crashes (PCs) and secondary crashes (SCs) is crucial for preventing SC occurrences and improving road safety. Although previous studies have tried to analyse the variation of gaps, there is limited evidence quantifying the relationships between different gaps and various influential factors. This study proposed a two-layer stacking framework to model the time and distance gaps. Specifically, the framework took random forests (RF), gradient boosting decision tree (GBDT), and eXtreme gradient boosting as the base classifiers in the first layer and applied logistic regression (LR) as a combiner in the second layer. On this basis, the local interpretable model-agnostic explanations (LIME) technique was used to interpret the output of the stacking model from both local and global perspectives. Through SC identification and feature selection, 346 SCs and 22 crash-related factors were collected from California interstate freeways. The results showed that the stacking model outperformed the base models as evaluated by accuracy, precision, and recall indicators. The explanations based on LIME suggest that collision type, distance, speed, and volume are the critical features that affect the time and distance gaps. Higher volume can prolong queue length and increase the distance gap from the SC to the PC. Collision type, peak periods, workdays, truck involvement, and tow-aways are likely to induce a long distance gap. Conversely, there is a shorter distance gap when secondary roads run in the same direction and are close to the primary roads. Lower speed is a significant factor resulting in a long time gap, while higher speed is correlated with a short time gap. These results are expected to provide insights into how contributory features affect the time and distance gaps and help decision-makers develop accurate decisions to prevent SCs.
Keywords: secondary crash (SC); time and distance gaps; stacking framework; local interpretable model-agnostic explanations (LIME)
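The two-layer stacking scheme in this abstract, base classifiers in the first layer and a logistic-regression combiner in the second, can be sketched with stand-in base scorers and a tiny SGD-trained logistic regression. The base rules, data, and hyperparameters are illustrative, not the paper's RF/GBDT/XGBoost setup.

```python
import math

def base_a(x):
    return x[0]              # stand-in for one first-layer classifier's score

def base_b(x):
    return 1.0 - x[1]        # stand-in for another first-layer classifier

def stack_features(X):
    """Second-layer inputs are the first-layer scores, one column per base model."""
    return [[base_a(x), base_b(x)] for x in X]

def train_lr(F, y, lr=0.5, epochs=2000):
    """Tiny SGD-trained logistic regression combiner."""
    w, b = [0.0] * len(F[0]), 0.0
    for _ in range(epochs):
        for f, t in zip(F, y):
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - t        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

X = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.7]]   # toy crash features
y = [1, 1, 0, 0]                                        # toy long/short gap labels
w, b = train_lr(stack_features(X), y)

def predict(x):
    z = sum(wi * fi for wi, fi in zip(w, [base_a(x), base_b(x)])) + b
    return 1 if z > 0 else 0
```

In a faithful reproduction the combiner would be trained on out-of-fold base-model predictions rather than in-sample scores, to keep the second layer from overfitting the first.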