Journal Articles
535 articles found
1. Explainable AI for predicting the strength of bio-cemented sands
Authors: Waleed El-Sekelly, Muhammad Nouman Amjad Raja, Tarek Abdoun. Journal of Rock Mechanics and Geotechnical Engineering, 2026, Issue 2, pp. 1552-1569 (18 pages).
The biological stabilization of soil using microbially induced carbonate precipitation (MICP) employs ureolytic bacteria to precipitate calcium carbonate (CaCO3), which binds soil particles, enhancing strength, stiffness, and erosion resistance. The unconfined compressive strength (UCS), a key measure of soil strength, is critical in geotechnical engineering as it directly reflects the mechanical stability of treated soils. This study integrates explainable artificial intelligence (XAI) with geotechnical insights to model the UCS of MICP-treated sands. Using 517 experimental data points and a combination of input variables, including median grain size (D50), coefficient of uniformity (Cu), void ratio (e), urea concentration (Mu), calcium concentration (Mc), optical density (OD) of the bacterial solution, pH, and total injection volume (Vt), five machine learning (ML) models were developed and optimized: eXtreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), random forest (RF), gene expression programming (GEP), and multivariate adaptive regression splines (MARS). The ensemble models (XGBoost, LightGBM, and RF) were optimized using the Chernobyl disaster optimizer (CDO), a recently developed metaheuristic algorithm. Of these, LightGBM-CDO achieved the highest accuracy for UCS prediction. XAI techniques such as feature importance analysis (FIA), SHapley Additive exPlanations (SHAP), and partial dependence plots (PDPs) were also used to investigate the complex non-linear relationships between the input and output variables. The results demonstrate that XAI-driven models can enhance the predictive accuracy and interpretability of MICP processes, offering a sustainable pathway for optimizing geotechnical applications.
Keywords: microbially induced carbonate precipitation (MICP); bio-cementation; unconfined compressive strength (UCS); explainable artificial intelligence (XAI); optimization
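The SHAP analysis in the study above attributes a model's prediction to its input features. As a conceptual sketch (not the authors' implementation: a toy two-feature model with made-up coefficients stands in for the trained LightGBM), exact Shapley values can be computed for a small feature count by enumerating coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a model with few features.

    For each feature i, average the marginal contribution of switching
    feature i from its baseline value to its actual value, over all
    subsets S of the remaining features, with the standard coalition
    weights |S|! (n - |S| - 1)! / n!.
    """
    n = len(x)

    def f(subset):
        # Evaluate the model with features in `subset` set to their
        # actual values and all others held at the baseline.
        z = [x[j] if j in subset else baseline[j] for j in range(n)]
        return model(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (f(set(S) | {i}) - f(set(S)))
        phi.append(total)
    return phi

# Toy "strength" model: output rises with two interacting inputs.
# The feature meanings and coefficients are purely illustrative.
model = lambda z: 2.0 * z[0] + 0.5 * z[1] + z[0] * z[1]
phi = shapley_values(model, x=[1.0, 2.0], baseline=[0.0, 0.0])
# Efficiency property: contributions sum to f(x) - f(baseline) = 5.0
print(phi, sum(phi))
```

Real SHAP tooling (e.g., TreeSHAP for tree ensembles) avoids this exponential enumeration with model-specific algorithms; the brute-force version above is only practical for a handful of features.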
2. An explainable deep learning approach to enhance the prediction of shield tunnel deviation
Authors: Jiajie Zhen, Fengwen Lai, Ming Huang, Junjie Zheng, Jim S. Shiau, Ping Wang, Jinhuo Zheng. Journal of Rock Mechanics and Geotechnical Engineering, 2026, Issue 1, pp. 566-579 (14 pages).
Although machine learning models have achieved high accuracy in predicting shield position deviations, their "black box" nature makes the prediction mechanisms and decision-making processes opaque, weakening explainability and practicability. This study introduces a novel explainable deep learning framework comprising the Informer model with enhanced attention mechanisms (EAMInfor) and deep learning important features (DeepLIFT), aimed at improving the prediction accuracy of shield position deviations and providing interpretability for predictive results. The EAMInfor model integrates channel attention, spatial attention, and simple attention modules to improve the Informer model's performance. The framework is tested on four datasets covering different geological conditions from Xiamen metro line 3, China. Results show that the EAMInfor model outperforms the traditional Informer and comparison models. Analysis with the DeepLIFT method indicates that the push thrust of the push cylinder and the earth chamber pressure are the most significant features, while the stroke length of the push cylinder is of lower importance. Furthermore, the variation trends in the significance of data points within input sequences differ substantially between single and composite strata. This framework not only improves predictive accuracy but also strengthens the credibility and reliability of the results.
Keywords: shield tunnel position deviation; machine learning; explainable AI; deep learning important features
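DeepLIFT, referenced in the abstract above, assigns each input a contribution relative to a reference activation. For a single linear unit its rule reduces to weight times difference-from-reference, and the contributions satisfy the completeness (summation-to-delta) property. A minimal sketch with illustrative weights and inputs (not the EAMInfor model):

```python
def deeplift_linear(weights, x, x_ref):
    """DeepLIFT's linear rule: each input's contribution is its weight
    times the input's difference from the reference. Contributions sum
    exactly to the change in the (pre-activation) output."""
    return [w * (xi - ri) for w, xi, ri in zip(weights, x, x_ref)]

w = [0.8, -1.2, 0.3]   # illustrative weights of one linear unit
x = [1.0, 0.5, 2.0]    # actual input
ref = [0.0, 0.0, 0.0]  # all-zero reference activation

contribs = deeplift_linear(w, x, ref)
# Output change relative to the reference, computed directly:
delta_out = sum(wi * xi for wi, xi in zip(w, x)) - sum(
    wi * ri for wi, ri in zip(w, ref)
)
print(contribs, delta_out)
```

For deep networks, DeepLIFT propagates such contributions layer by layer with additional rules for nonlinearities; the completeness check above is the invariant every rule preserves.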
3. The 3D-Geoformer for ENSO studies: a Transformer-based model with integrated gradient methods for enhanced explainability (cited 2 times)
Authors: Lu Zhou, Rong-Hua Zhang. Journal of Oceanology and Limnology, 2025, Issue 6, pp. 1688-1708 (21 pages).
Deep learning (DL) has become a crucial technique for predicting the El Niño-Southern Oscillation (ENSO) and evaluating its predictability. While various DL-based models have been developed for ENSO predictions, many fail to capture the coherent multivariate evolution within the coupled ocean-atmosphere system of the tropical Pacific. To address this three-dimensional (3D) limitation and represent ENSO-related ocean-atmosphere interactions more accurately, a novel 3D multivariate prediction model was proposed based on a Transformer architecture, which incorporates a spatiotemporal self-attention mechanism. This model, named 3D-Geoformer, offers several advantages, enabling accurate ENSO predictions up to one and a half years in advance. Furthermore, an integrated gradient method was introduced into the model to identify the sources of predictability for sea surface temperature (SST) variability in the eastern equatorial Pacific. Results reveal that the 3D-Geoformer effectively captures ENSO-related precursors during the evolution of ENSO events, particularly the thermocline feedback processes and ocean temperature anomaly pathways on and off the equator. By extending DL-based ENSO predictions from one-dimensional Niño time series to 3D multivariate fields, the 3D-Geoformer represents a significant advancement in ENSO prediction. This study provides details on the model formulation, analysis procedures, sensitivity experiments, and illustrative examples, offering practical guidance for applying the model in ENSO research.
Keywords: Transformer model; 3D-Geoformer; El Niño-Southern Oscillation (ENSO) prediction; explainable artificial intelligence (XAI); integrated gradient method
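The integrated gradient method mentioned above attributes a prediction f(x) to its inputs by accumulating gradients along a straight-line path from a baseline to x. A self-contained sketch on a toy differentiable function with an analytic gradient (a stand-in for the 3D-Geoformer; the function and variable names are illustrative only):

```python
def integrated_gradients(f, grad_f, x, baseline, steps=200):
    """Midpoint Riemann-sum approximation of integrated gradients
    along the straight-line path from `baseline` to `x`."""
    n = len(x)
    ig = [0.0] * n
    for s in range(steps):
        alpha = (s + 0.5) / steps  # midpoint of each sub-interval
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        g = grad_f(point)
        for i in range(n):
            ig[i] += g[i] * (x[i] - baseline[i]) / steps
    return ig

# Toy nonlinear "predictor": f(z) = z0^2 + 3*z0*z1, with its gradient.
f = lambda z: z[0] ** 2 + 3 * z[0] * z[1]
grad_f = lambda z: [2 * z[0] + 3 * z[1], 3 * z[0]]

ig = integrated_gradients(f, grad_f, x=[1.0, 2.0], baseline=[0.0, 0.0])
# Completeness axiom: attributions sum to f(x) - f(baseline) = 7
print(ig, sum(ig))
```

The completeness axiom (attributions summing to f(x) - f(baseline)) is the standard sanity check for any integrated-gradients implementation; in deep learning frameworks the analytic gradient is replaced by automatic differentiation.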
4. Explainable artificial intelligence and ensemble learning for hepatocellular carcinoma classification: state of the art, performance, and clinical implications
Authors: Sami Akbulut, Cemil Colak. World Journal of Hepatology, 2025, Issue 11, pp. 11-25 (15 pages).
Hepatocellular carcinoma (HCC) remains a leading cause of cancer-related mortality globally, necessitating advanced diagnostic tools to improve early detection and personalized targeted therapy. This review synthesizes evidence on explainable ensemble learning approaches for HCC classification, emphasizing their integration with clinical workflows and multi-omics data. A systematic analysis (including datasets such as The Cancer Genome Atlas, Gene Expression Omnibus, and the Surveillance, Epidemiology, and End Results (SEER) datasets) revealed that explainable ensemble learning models achieve high diagnostic accuracy by combining clinical features, serum biomarkers such as alpha-fetoprotein, imaging features such as computed tomography and magnetic resonance imaging, and genomic data. For instance, SHapley Additive exPlanations (SHAP)-based random forests trained on NCBI GSE14520 microarray data (n = 445) achieved 96.53% accuracy, while stacking ensembles applied to the SEER program data (n = 1897) demonstrated an area under the receiver operating characteristic curve of 0.779 for mortality prediction. Despite promising results, challenges persist, including the computational costs of SHAP and local interpretable model-agnostic explanations (LIME) analyses (e.g., TreeSHAP requiring distributed computing for metabolomics datasets) and dataset biases (e.g., SEER's Western population dominance limiting generalizability). Future research must address inter-cohort heterogeneity, standardize explainability metrics, and prioritize lightweight surrogate models for resource-limited settings. This review presents the potential of explainable ensemble learning frameworks to bridge the gap between predictive accuracy and clinical interpretability, though rigorous validation in independent, multi-center cohorts is critical for real-world deployment.
Keywords: hepatocellular carcinoma; artificial intelligence; explainable artificial intelligence; ensemble learning; explainable ensemble learning
5. Alternative Lens to Understand the Relationships Between Neighborhood Environment and Well-being with Capability Approach and Explainable Artificial Intelligence
Authors: JIAO Linshen, ZHANG Min, ZHEN Feng, QIN Xiao, CHEN Peipei, ZHANG Shanqi, HU Yuchen. Chinese Geographical Science, 2025, Issue 3, pp. 472-491 (20 pages).
The relationship between the neighborhood environment and well-being is attracting increasing attention from researchers and policymakers, as the goal of development has shifted from the economy to well-being. However, the existing literature predominantly adopts the utilitarian approach, understanding well-being as people's feelings about their lives and viewing the neighborhood environment as resources that benefit well-being. The Capability Approach, a novel approach that conceptualizes well-being as the freedom to do or to be and regards the environment as conversion factors that influence well-being, can offer a new lens by incorporating human development into these topics. This paper proposes an alternative theoretical framework: well-being is conceptualized and measured by capability; the neighborhood environment affects well-being by providing spatial services, functioning as environmental conversion factors, and serving as social conversion factors. We conducted a case study of Changshu City in eastern China, utilizing multiple data sources and applying explainable artificial intelligence (XAI), namely eXtreme Gradient Boosting (XGBoost) and SHapley Additive exPlanations (SHAP). Our findings highlight the significance of viewing the neighborhood environment as a set of conversion factors, as this provides more explanatory power than viewing it as a provider of spatial services. Compared to conventional research based on the linear-relationship assumption, our results demonstrate that the effects of the neighborhood environment on well-being are non-linear, characterized by threshold effects and interaction effects. These insights are crucial for informing urban planning and public policy. This research enriches our understanding of well-being, the neighborhood environment, and their relationship, and provides empirical evidence for the core concept of conversion factors in the capability approach.
Keywords: well-being; neighborhood environment; capability approach; non-linear relationship; explainable artificial intelligence (XAI)
6. Evaluating the affecting factors of glacier mass balance in Tanggula Mountains using explainable machine learning and the open global glacier model
Authors: XU Qiangqiang, KANG Shichang, HE Xiaobo, XU Min. Journal of Mountain Science, 2025, Issue 2, pp. 466-488 (23 pages).
Glacier mass balance is a key indicator of glacier health and climate change sensitivity. Influencing factors include both climatic and non-climatic elements, forming a complex set of drivers. There is a lack of quantitative analysis of these composite factors, particularly in climate-typical regions like the Tanggula Mountains on the central Tibetan Plateau. We collected data on various factors affecting glacier mass balance from 2000 to 2020, including climate variables, topographic variables, geometric parameters, and glacier dynamics. We utilized linear regression models, ensemble learning models, and the Open Global Glacier Model (OGGM) to analyze glacier mass balance changes in the Tanggula Mountains. Results indicate that linear models explain 58% of the variance in glacier mass balance, with seasonal temperature and precipitation having significant impacts. Our findings show that ensemble learning models improved explanatory accuracy by 5.2% by including the impact of topographic and geometric factors such as the average glacier height, the slope of the glacier tongue, the speed of the ice flow, and the area of the glacier. Interpretable machine learning identified the spatial distribution of the positive and negative impacts of these characteristics and the interaction between glacier topography and ice dynamics. Finally, we predicted the responses of glaciers of different sizes to future climate change based on the results of interpretable machine learning. It was found that relatively large glaciers (>1 km²) are likely to persist until the end of this century under low emission scenarios, whereas small glaciers (<1 km²) are expected to nearly disappear by 2080 under any emission scenario. Our research provides technical support for improving glacier change modeling and protection on the Tibetan Plateau.
Keywords: glacier mass balance; Tanggula Mountains; explainable machine learning; Open Global Glacier Model; climate change
7. Explainable artificial intelligence model for the prediction of undrained shear strength
Authors: Ho-Hong-Duy Nguyen, Thanh-Nhan Nguyen, Thi-Anh-Thu Phan, Ngoc-Thi Huynh, Quoc-Dat Huynh, Tan-Tai Trieu. Theoretical & Applied Mechanics Letters, 2025, Issue 3, pp. 284-295 (12 pages).
Machine learning (ML) models are widely used for predicting undrained shear strength (USS), but interpretability has been a limitation in various studies. Therefore, this study introduced SHapley Additive exPlanations (SHAP) to clarify the contribution of each input feature in USS prediction. Three ML models, an artificial neural network (ANN), extreme gradient boosting (XGBoost), and random forest (RF), were employed, with accuracy evaluated using mean squared error, mean absolute error, and the coefficient of determination (R²). The RF achieved the highest performance, with an R² of 0.82. SHAP analysis identified pre-consolidation stress as a key contributor to USS prediction. SHAP dependence plots reveal that the ANN captures smoother, linear feature-output relationships, while the RF handles complex, non-linear interactions more effectively. This suggests a non-linear relationship between USS and the input features, with RF outperforming ANN. These findings highlight SHAP's role in enhancing interpretability and promoting transparency and reliability in ML predictions for geotechnical applications.
Keywords: prediction of undrained shear strength; explanation model; Shapley additive explanation model; explainable AI
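SHAP dependence plots, like the classical partial dependence plots (PDPs) they refine, trace how a model's output moves as one feature varies. A minimal PDP sketch, with a toy threshold model and made-up rows standing in for the study's RF and soil data:

```python
def partial_dependence(model, X, feature, grid):
    """Partial dependence of `model` on one feature: for each grid
    value, fix that feature across every row of X and average the
    predictions. Reveals the marginal feature-output shape."""
    pdp = []
    for v in grid:
        preds = []
        for row in X:
            z = list(row)
            z[feature] = v  # override the feature of interest
            preds.append(model(z))
        pdp.append(sum(preds) / len(preds))
    return pdp

# Toy non-linear "strength model": a threshold effect in feature 0
# plus a linear effect in feature 1 (purely illustrative).
model = lambda z: (10.0 if z[0] > 0.5 else 2.0) + z[1]
X = [[0.2, 1.0], [0.7, 2.0], [0.9, 3.0]]

pdp = partial_dependence(model, X, feature=0, grid=[0.0, 0.4, 0.6, 1.0])
print(pdp)  # flat, then a jump past the 0.5 threshold: [4.0, 4.0, 12.0, 12.0]
```

A flat-then-jump PDP like this one is the signature of the threshold effects that tree ensembles capture and linear models miss.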
8. Graph-Based Intrusion Detection with Explainable Edge Classification Learning
Authors: Jaeho Shin, Jaekwang Kim. Computers, Materials & Continua, 2026, Issue 1, pp. 610-635 (26 pages).
Network attacks have become a critical issue in the internet security domain. Detection methodologies based on artificial intelligence have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to explain detection results obtained with artificial intelligence logically. We propose a method for classifying network attacks using graph models that can explain the detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks via edge classification. To explain the prediction results, we observe numerical changes obtained by randomly masking neighbors and calculating their importance, allowing us to extract significant subgraphs. Our experiments on six public datasets demonstrate superior performance, with an average F1-score of 0.960 and accuracy of 0.964, outperforming traditional machine learning and other graph models. The visual representation of the extracted subgraphs highlights the neighboring nodes that have the greatest impact on the results, thus explaining detection. In conclusion, this study demonstrates that graph-based models are suitable for network attack detection in complex environments, and that the importance of graph neighbors can be calculated to efficiently analyze the results. This approach can contribute to real-world network security analyses and provide a new direction for the field.
Keywords: intrusion detection; graph neural network; explainable AI; network attacks; GraphSAGE
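The masking-based explanation described above can be sketched generically: mask one neighbor at a time and record how much the model's score drops. The scorer below is a toy mean-aggregation stand-in for a trained GraphSAGE model (the weights and features are illustrative, not the paper's implementation):

```python
def neighbor_importance(score_fn, node_feats, neighbors):
    """Importance of each neighbor = drop in the model's score when
    that neighbor is masked (removed from the aggregation)."""
    base = score_fn(node_feats, neighbors)
    importance = {}
    for i in range(len(neighbors)):
        masked = neighbors[:i] + neighbors[i + 1:]
        importance[i] = base - score_fn(node_feats, masked)
    return importance

# Toy GraphSAGE-style scorer: mean-aggregate neighbor features, then
# apply a fixed linear readout (illustrative weights 0.5 and 1.0).
def score_fn(x, nbrs):
    if not nbrs:
        agg = [0.0] * len(x)
    else:
        agg = [sum(n[d] for n in nbrs) / len(nbrs) for d in range(len(x))]
    return sum(0.5 * xi + 1.0 * ai for xi, ai in zip(x, agg))

node = [1.0, 0.0]
nbrs = [[4.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
imp = neighbor_importance(score_fn, node, nbrs)
print(imp)  # neighbor 0 has the largest positive influence here
```

Ranking neighbors by this importance and keeping the top-scoring ones yields exactly the kind of significant subgraph the abstract describes visualizing.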
9. Cascading Class Activation Mapping: A Counterfactual Reasoning-Based Explainable Method for Comprehensive Feature Discovery
Authors: Seoyeon Choi, Hayoung Kim, Guebin Choi. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 1043-1069 (27 pages).
Most Convolutional Neural Network (CNN) interpretation techniques visualize only the dominant cues that the model relies on, but there is no guarantee that these represent all the evidence the model uses for classification. This limitation becomes critical when hidden secondary cues, potentially more meaningful than the visualized ones, remain undiscovered. This study introduces CasCAM (Cascaded Class Activation Mapping) to address this fundamental limitation through counterfactual reasoning. By asking "if this dominant cue were absent, what other evidence would the model use?", CasCAM progressively masks the most salient features and systematically uncovers the hierarchy of classification evidence hidden beneath them. Experimental results demonstrate that CasCAM effectively discovers the full spectrum of reasoning evidence and can be applied universally with nine existing interpretation methods.
Keywords: explainable AI; class activation mapping; counterfactual reasoning; shortcut learning; feature discovery
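The counterfactual loop CasCAM describes (mask the dominant cue, then re-attribute) can be sketched in a few lines. Here saliency is simply pixel intensity, a placeholder for a real class activation map, so this illustrates only the cascade logic, not the paper's method:

```python
def cascade_salient_regions(saliency_fn, image, rounds=3):
    """CasCAM-style loop (conceptual sketch): repeatedly find the most
    salient cell, record it, then mask it so that weaker secondary
    cues can surface in the next round."""
    img = [row[:] for row in image]  # work on a copy
    discovered = []
    for _ in range(rounds):
        sal = saliency_fn(img)
        # Locate the strongest cell in the current saliency map.
        r, c = max(
            ((r, c) for r in range(len(sal)) for c in range(len(sal[0]))),
            key=lambda rc: sal[rc[0]][rc[1]],
        )
        discovered.append((r, c))
        img[r][c] = 0.0  # counterfactual: remove the dominant cue
    return discovered

# Toy saliency = the pixel intensity itself (a stand-in for a CAM).
saliency_fn = lambda im: [row[:] for row in im]
image = [[0.9, 0.1],
         [0.4, 0.7]]

order = cascade_salient_regions(saliency_fn, image, rounds=3)
print(order)  # cues surface in order of dominance: (0,0), (1,1), (1,0)
```

With a real CNN, `saliency_fn` would recompute a class activation map on the masked input each round, so secondary cues can genuinely change the map rather than merely being the next-brightest pixel.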
10. Transforming Healthcare with State-of-the-Art Medical-LLMs: A Comprehensive Evaluation of Current Advances Using Benchmarking Framework
Authors: Himadri Nath Saha, Dipanwita Chakraborty Bhattacharya, Sancharita Dutta, Arnab Bera, Srutorshi Basuray, Satyasaran Changdar, Saptarshi Banerjee, Jon Turdiev. Computers, Materials & Continua, 2026, Issue 2, pp. 234-289 (56 pages).
The emergence of Medical Large Language Models has significantly transformed healthcare. Medical Large Language Models (Med-LLMs) serve as transformative tools that enhance clinical practice through applications in decision support, documentation, and diagnostics. This evaluation examines the performance of leading Med-LLMs, including GPT-4Med, Med-PaLM, MEDITRON, PubMedGPT, and MedAlpaca, across diverse medical datasets. It provides graphical comparisons of their effectiveness in distinct healthcare domains. The study introduces a domain-specific categorization system that aligns these models with optimal applications in clinical decision-making, documentation, drug discovery, research, patient interaction, and public health. The paper addresses the deployment challenges of Med-LLMs, emphasizing trustworthiness and explainability as essential requirements for healthcare AI. It presents current evaluation techniques that improve model transparency in high-stakes medical contexts and analyzes regulatory frameworks using benchmarking datasets such as MedQA, MedMCQA, PubMedQA, and MIMIC. By identifying ongoing challenges in bias mitigation, reliability, and ethical compliance, this work serves as a resource for selecting appropriate Med-LLMs and outlines future directions for the field. This analysis offers a roadmap for developing Med-LLMs that balance technological innovation with the trust and transparency required for clinical integration, a perspective often overlooked in the existing literature.
Keywords: medical large language models (Med-LLM); AI in healthcare; natural language processing (NLP) in medicine; fine-tuning medical LLMs; retrieval-augmented generation (RAG) in medicine; multi-modal learning in healthcare; explainability and transparency in medical AI; FDA regulations for AI in medicine; evaluation and benchmarking of medical large language models
11. The Chinese skeleton: insights into microstructure that help to explain the epidemiology of fracture (cited 13 times)
Authors: Elaine Cong, Marcella D. Walker. Bone Research (SCIE, CAS), 2014, Issue 2, pp. 80-92 (13 pages).
Osteoporotic fractures are a major public health problem worldwide, but incidence varies greatly across racial groups and geographic regions. Recent work suggests that the incidence of osteoporotic fracture is rising among Asian populations. Studies comparing areal bone mineral density and fracture across races generally indicate lower bone mineral density in Asian individuals including the Chinese, but this does not reflect their relatively low risk of non-vertebral fractures. In contrast, the Chinese have relatively high vertebral fracture rates similar to that of Caucasians. The paradoxically low risk for some types of fractures among the Chinese despite their low areal bone mineral density has been elucidated in part by recent advances in skeletal imaging. New techniques for assessing bone quality non-invasively demonstrate that the Chinese compensate for smaller bone size by differences in hip geometry and microstructural skeletal organization. Studies evaluating factors influencing racial differences in bone remodeling, as well as bone acquisition and loss, may further elucidate racial variation in bone microstructure. Advances in understanding the microstructure of the Chinese skeleton have not only helped to explain the epidemiology of fracture in the Chinese, but may also provide insight into the epidemiology of fracture in other races as well.
Keywords: bone; insights into microstructure that help to explain the epidemiology of fracture; the Chinese skeleton
12. Co-selection may explain the unexpectedly high prevalence of plasmid-mediated colistin resistance gene mcr-1 in a Chinese broiler farm (cited 7 times)
Authors: Yu-Ping Cao, Qing-Qing Lin, Wan-Yun He, Jing Wang, Meng-Ying Yi, Lu-Chao Lv, Jun Yang, Jian-Hua Liu, Jian-Ying Guo. Zoological Research (SCIE, CAS, CSCD), 2020, Issue 5, pp. 569-575 (7 pages).
DEAR EDITOR, The rise of the plasmid-encoded colistin resistance gene mcr-1 is a major concern globally. Here, during routine surveillance, an unexpectedly high prevalence of Escherichia coli with reduced susceptibility to colistin (69.9%) was observed in a Chinese broiler farm. Fifty-three (63.9%) E. coli isolates were positive for mcr-1. All identified mcr-1-positive E.
Keywords: globally; routine; explain
13. Explainable Artificial Intelligence-A New Step towards the Trust in Medical Diagnosis with AI Frameworks: A Review (cited 1 time)
Authors: Nilkanth Mukund Deshpande, Shilpa Gite, Biswajeet Pradhan, Mazen Ebraheem Assiri. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, Issue 12, pp. 843-872 (30 pages).
Machine learning (ML) has emerged as a critical enabling tool in the sciences and industry in recent years. Today's machine learning algorithms can achieve outstanding performance on an expanding variety of complex tasks, thanks to advancements in technique, the availability of enormous databases, and improved computing power. Deep learning models are at the forefront of this advancement. However, because of their nested non-linear structure, these strong models are termed "black boxes," as they provide no information about how they arrive at their conclusions. Such a lack of transparency may be unacceptable in many applications, such as the medical domain. A lot of emphasis has recently been placed on the development of methods for visualizing, explaining, and interpreting deep learning models. The situation is substantially different in safety-critical applications, where the lack of transparency of machine learning techniques may be a limiting or even disqualifying issue. Significantly, when single bad decisions can endanger human life and health (e.g., autonomous driving, the medical domain) or result in significant monetary losses (e.g., algorithmic trading), depending on an unintelligible data-driven system may not be an option. This lack of transparency is one reason why machine learning is adopted more cautiously in sectors like health than in the consumer, e-commerce, or entertainment industries. Explainability is the term introduced in the preceding years: with these frameworks, the AI model's black-box nature becomes explainable. Especially in the medical domain, diagnosing a particular disease through AI techniques would otherwise be less suited to commercial use; the explainable nature of these models will help them commercially in diagnosis decisions in the medical field. This paper explores the different frameworks for the explainability of AI models in the medical field. The available frameworks are compared across several parameters, and their suitability for medical fields is also discussed.
Keywords: medical imaging; explainability; artificial intelligence; XAI
14. Using Speech Recognition in Learning Primary School Mathematics via Explain, Instruct and Facilitate Techniques (cited 1 time)
Authors: Ab Rahman Ahmad, Sami M. Halawani, Samir K. Boucetta. Journal of Software Engineering and Applications, 2014, Issue 4, pp. 233-255 (23 pages).
The application of Information and Communication Technologies has transformed traditional Teaching and Learning over the past decade into a computerized-based era. This evolution has resulted from the emergence of digital systems and has greatly impacted global education and socio-cultural development. Multimedia has been absorbed into the education sector to produce a new learning concept combining educational and entertainment approaches. This research is concerned with the application of Windows Speech Recognition and the Microsoft Visual Basic 2008 Integrated/Interactive Development Environment in developing a Multimedia-Assisted Courseware prototype for Primary School Mathematics contents, namely single digits and addition. The Teaching and Learning techniques (Explain, Instruct and Facilitate) are proposed; these can be viewed as an instructor-centered strategy, instructor-learner dual communication, and learners' active participation. The prototype, called M-EIF, deploys only users' voices; hence, activation of Windows Speech Recognition is required prior to a test run.
Keywords: Explain, Instruct and Facilitate techniques; multimedia-assisted courseware; primary school mathematics; visual natural language; Windows Speech Recognition
15. A Study on the Explainability of Thyroid Cancer Prediction: SHAP Values and Association-Rule Based Feature Integration Framework
Authors: Sujithra Sankar, S. Sathyalakshmi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 3111-3138 (28 pages).
In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule based feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as a powerful tool for explaining thyroid cancer prediction models. In the proposed method, the association-rule based feature integration framework identifies frequently occurring attribute combinations in the dataset. The original dataset is used in training machine learning models, which are further used in generating SHAP values. In the next phase, the dataset is integrated with the dominant feature sets identified through association-rule based analysis. This new integrated dataset is used to re-train the machine learning models, and the new SHAP values generated from these models help validate the contributions of feature sets in predicting malignancy. Conventional machine learning models lack interpretability, which can hinder their integration into clinical decision-making systems. In this study, SHAP values are introduced along with association-rule based feature integration as a comprehensive framework for understanding the contributions of feature sets to the model's predictions. The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer, together with a validation framework for explainability. The proposed model shows an accuracy of 93.48%. Performance metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUROC) are also higher than those of the baseline models. The results of the proposed model help identify the dominant feature sets that impact thyroid cancer classification and prediction. The features {calcification} and {shape} consistently emerged as the top-ranked features associated with thyroid malignancy, in both association-rule based interestingness metric values and SHAP methods. The paper highlights the potential of rule-based integrated models with SHAP in bridging the gap between machine learning predictions and the interpretability of those predictions, which is required for real-world medical applications.
Keywords: explainable AI; machine learning; clinical decision support systems; thyroid cancer; association-rule based framework; SHAP values; classification and prediction
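The association-rule stage described above starts by mining frequently co-occurring attribute sets. A minimal support-counting sketch: the records and attribute names (such as "calcification" and "shape", echoing the abstract) are illustrative, not the study's data:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Enumerate attribute combinations whose support (the fraction of
    records containing all of them) meets the threshold, the first step
    of association-rule based feature integration."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        found = False
        for combo in combinations(items, k):
            support = sum(1 for t in transactions if set(combo) <= t) / n
            if support >= min_support:
                result[combo] = support
                found = True
        if not found:
            break  # Apriori property: no larger set can still qualify
    return result

# Toy records of discretized nodule attributes (illustrative only).
records = [
    {"calcification", "shape", "solid"},
    {"calcification", "shape"},
    {"calcification", "margin"},
    {"shape", "margin"},
]
sets = frequent_itemsets(records, min_support=0.5)
print(sets)  # {calcification, shape} co-occur in half of the records
```

Dedicated miners (Apriori, FP-Growth) prune the candidate space far more aggressively; this exhaustive version is only meant to make the support computation concrete.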
How to Explain the Meanings of Words in Middle School English Teaching
16
Author: Wang Haisheng, 《湖州师范学院学报》 (Journal of Huzhou Teachers College), 1990, Issue 1, pp. 35-37
New words are constantly coming into a language. Science and technology are exploring new fields, and the language grows richer and more expressive. Old words get new meanings too. "Heavy" used to be an adjective meaning "having weight". It is now used also as a noun, meaning "a big shot". Students have to struggle with learning words throughout the course, from the very beginning to the very end. English has a very large vocabulary. It is utterly impossible for a foreign language student to learn all the words in a language. So in English teaching, teachers have to select the most useful words for students to learn. Teachers should take care to use the...
Keywords: vocabulary; struggle; exploring; beginning; constantly; expensive; throughout; impossible; conveying; explain
Unveiling dominant factors for gully distribution in wildfire-affected areas using explainable AI: A case study of Xiangjiao catchment, Southwest China (cited by 2)
17
Authors: ZHOU Ruichen, HU Xiewen, XI Chuanjie, HE Kun, DENG Lin, LUO Gang, Journal of Mountain Science, 2025, Issue 8, pp. 2765-2792
Wildfires significantly disrupt the physical and hydrologic conditions of the environment, leading to vegetation loss and altered surface geo-material properties. These complex dynamics promote post-fire gully erosion, yet the key conditioning factors (e.g., topography, hydrology) remain insufficiently understood. This study proposes a novel artificial intelligence (AI) framework that integrates four machine learning (ML) models with the SHapley Additive exPlanations (SHAP) method, offering a hierarchical, global-to-local perspective on the dominant factors controlling gully distribution in wildfire-affected areas. In a case study of the Xiangjiao catchment, burned on March 28, 2020, in Muli County, Sichuan Province, Southwest China, we derived 21 geo-environmental factors to assess the susceptibility of post-fire gully erosion using logistic regression (LR), support vector machine (SVM), random forest (RF), and convolutional neural network (CNN) models. SHAP-based model interpretation revealed eight key conditioning factors: topographic position index (TPI), topographic wetness index (TWI), distance to stream, mean annual precipitation, differenced normalized burn ratio (dNBR), land use/cover, soil type, and distance to road. Comparative model evaluation demonstrated that reduced-variable models incorporating these dominant factors achieved accuracy comparable to that of the initial-variable models, with AUC values exceeding 0.868 across all ML algorithms. These findings provide critical insights into gully erosion behavior in wildfire-affected areas, supporting decision-making for environmental management and hazard mitigation.
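The SHAP attributions used in studies like this one are, conceptually, Shapley values of the fitted model computed over feature coalitions. A minimal sketch of the exact computation is below; the susceptibility score and its coefficients are invented for illustration (only the factor names are borrowed from the abstract), and real SHAP implementations approximate this sum efficiently rather than enumerating coalitions:

```python
from itertools import combinations
from math import factorial

def exact_shapley(features, value):
    """Exact Shapley value of each feature for a set-valued model `value`."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(len(others) + 1):
            for s in combinations(others, r):
                # Shapley weight |S|! (n-|S|-1)! / n! for coalition S
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += w * (value(set(s) | {f}) - value(set(s)))
        phi[f] = total
    return phi

# Hypothetical susceptibility score: TWI and TPI interact; distance acts alone.
def score(active):
    v = 0.0
    if "TWI" in active: v += 0.3
    if "TPI" in active: v += 0.2
    if "TWI" in active and "TPI" in active: v += 0.1  # interaction term
    if "dist_stream" in active: v -= 0.1
    return v

phi = exact_shapley(["TWI", "TPI", "dist_stream"], score)
```

Note that the interaction bonus is split evenly between TWI and TPI, and the attributions sum to the full-coalition score — the efficiency property that makes SHAP rankings interpretable.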
Keywords: gully erosion susceptibility; explainable AI; wildfire; geo-environmental factor; machine learning
High-throughput screening of CO_(2) cycloaddition MOF catalyst with an explainable machine learning model
18
Authors: Xuefeng Bai, Yi Li, Yabo Xie, Qiancheng Chen, Xin Zhang, Jian-Rong Li, Green Energy & Environment (SCIE, EI, CAS), 2025, Issue 1, pp. 132-138
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since large MOF structure databases are available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO_(2) cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to high accuracy, up to 97%, with the 75% quantile of the training set as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analysis to provide physical understanding. 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day using the model, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among reported MOFs, in good agreement with the prediction.
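The "75% quantile as the classification criterion" means the screening task is cast as binary classification: candidates above the 75th percentile of the training-set performance metric count as promising. A minimal sketch, with invented yield values and a linear-interpolation quantile (the convention NumPy uses by default):

```python
def quantile(values, q):
    """Linear-interpolation quantile over a list of numbers (0 <= q <= 1)."""
    xs = sorted(values)
    pos = q * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (pos - lo)

# Hypothetical training-set catalytic yields (%), not the paper's data.
yields = [12.0, 35.0, 48.0, 55.0, 61.0, 70.0, 82.0, 90.0]

threshold = quantile(yields, 0.75)        # 75% quantile as class boundary
labels = [1 if y >= threshold else 0 for y in yields]  # 1 = promising
```

A classifier trained on `labels` can then rank thousands of unmeasured structures far faster than per-structure simulation, which is what makes the one-day screen of 12,415 hypothetical MOFs plausible.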
Keywords: metal-organic frameworks; high-throughput screening; machine learning; explainable model; CO_(2) cycloaddition
Intrumer:A Multi Module Distributed Explainable IDS/IPS for Securing Cloud Environment
19
Authors: Nazreen Banu A, S.K.B. Sangeetha, Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 579-607
The increasing use of cloud-based devices has brought cybersecurity and unwanted network traffic to a critical point. Cloud environments pose significant challenges in maintaining privacy and security. Global approaches, such as IDS, have been developed to tackle these issues. However, most conventional Intrusion Detection System (IDS) models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces a novel distributed, explainable, heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together. The traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and the Naïve Bayes algorithm performs the classification of features. The selected features are classified further and forwarded to the Heterogeneous Attention Transformer (HAT) module. In this module, the contextual interactions of the network traffic are taken into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. With the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to enhance preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD were conducted to demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. Such results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
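The Naïve Bayes step in the TC&TM module can be illustrated with a minimal Bernoulli Naïve Bayes over binary traffic features. This is a generic sketch of the technique, not the paper's implementation; the feature names, toy flows, and Laplace smoothing constants are all invented:

```python
from math import log

def train_nb(X, y):
    """Per-class log-prior and Laplace-smoothed Bernoulli feature probabilities."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = log(len(rows) / len(X))
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)  # Laplace
                 for j in range(len(X[0]))]
        model[c] = (prior, probs)
    return model

def predict_nb(model, x):
    """Pick the class maximizing log-prior plus Bernoulli log-likelihood."""
    def loglik(c):
        prior, probs = model[c]
        return prior + sum(log(p) if xi else log(1 - p)
                           for xi, p in zip(x, probs))
    return max(model, key=loglik)

# Toy flows: [many_connections, unusual_port, large_payload]; 1 = malicious.
X = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [0, 1, 0], [0, 0, 1]]
y = [1, 1, 0, 0, 0]
model = train_nb(X, y)
```

In the system described above, this lightweight first-pass classification would gate which flows are escalated to the heavier transformer-based HAT module.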
Keywords: cloud computing; intrusion detection system; transformers; explainable artificial intelligence (XAI)
Vice Minister of Information Explains the Accounting Policy of China Telecom
20
China's Foreign Trade, 2000, Issue 9, pp. 19-20
Keywords: Vice Minister of Information explains the Accounting Policy of China Telecom