Journal Articles
1,220 articles found
Computation Tree Logic Model Checking of Multi-Agent Systems Based on Fuzzy Epistemic Interpreted Systems
1
Authors: Xia Li, Zhanyou Ma, Zhibao Mian, Ziyuan Liu, Ruiqi Huang, Nana He. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4129-4152 (24 pages)
Model checking is an automated formal verification method to verify whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of quantitative and uncertain property verification for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we illustrate the practical application of our approach through an example of a train control system.
Keywords: model checking, multi-agent systems, fuzzy epistemic interpreted systems, fuzzy computation tree logic, transformation algorithm
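The abstract above describes checking fuzzy temporal-epistemic formulas over fuzzy Kripke structures without giving formulas. As a rough illustration of the kind of semantics involved (not the paper's actual algorithm), one common fuzzy semantics evaluates the existential next-step operator EX by max-min composition of the fuzzy transition relation with the fuzzy valuation of the subformula. All structures below are invented toy data:

```python
import numpy as np

# Fuzzy Kripke structure: transition degrees in [0, 1] between 3 states,
# and a fuzzy valuation of an atomic proposition p at each state.
R = np.array([
    [0.0, 0.9, 0.4],
    [0.2, 0.0, 1.0],
    [0.7, 0.0, 0.0],
])
p = np.array([0.3, 0.8, 0.6])  # truth degree of p in each state

def fuzzy_EX(R, phi):
    """Truth degree of 'EX phi' per state: best successor, max-min composition."""
    return np.max(np.minimum(R, phi[np.newaxis, :]), axis=1)

# State 0 can reach state 1 to degree 0.9, where p holds to degree 0.8,
# so EX p holds at state 0 to degree min(0.9, 0.8) = 0.8.
print(fuzzy_EX(R, p))
```

Fixed-point operators such as EG or EU would iterate this one-step operator until the truth-degree vector stabilizes, mirroring classical CTL model checking.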
Engine Failure Prediction on Large-Scale CMAPSS Data Using Hybrid Feature Selection and Imbalance-Aware Learning
2
Authors: Ahmad Junaid, Abid Iqbal, Abuzar Khan, Ghassan Husnain, Abdul-Rahim Ahmad, Mohammed Al-Naeem. Computers, Materials & Continua, 2026, Issue 4, pp. 1485-1508 (24 pages)
Most predictive maintenance studies have emphasized accuracy but paid little attention to interpretability or deployment readiness. This study improves on prior methods by developing a small yet robust system that can predict when turbofan engines will fail. It uses the NASA CMAPSS dataset, which has over 200,000 engine cycles from 260 engines. The process begins with systematic preprocessing, which includes imputation, outlier removal, scaling, and labelling of the remaining useful life. Dimensionality is reduced using a hybrid selection method that combines variance filtering, recursive elimination, and gradient-boosted importance scores, yielding a stable set of 10 informative sensors. To mitigate class imbalance, minority cases are oversampled and class-weighted losses are applied during training. Benchmarking is carried out with logistic regression, gradient boosting, and a recurrent design that integrates gated recurrent units with long short-term memory networks. The Long Short-Term Memory-Gated Recurrent Unit (LSTM-GRU) hybrid achieved the strongest performance with an F1 score of 0.92, precision of 0.93, recall of 0.91, Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) of 0.97, and minority recall of 0.75. Interpretability testing using permutation importance and Shapley values indicates that sensors 13, 15, and 11 are the most important indicators of engine wear. The proposed system combines imbalance handling, feature reduction, and interpretability into a practical design suitable for real industrial settings.
Keywords: predictive maintenance, CMAPSS dataset, feature selection, class imbalance, LSTM-GRU hybrid model, interpretability, industrial deployment
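The imbalance-handling step named in the abstract above (minority oversampling plus class-weighted losses) can be sketched with scikit-learn on synthetic data standing in for the CMAPSS sensor windows. The dataset, model, and sizes below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 1,000 healthy cycles, 50 near-failure cycles, 10 sensors.
X_maj = rng.normal(0.0, 1.0, size=(1000, 10))
X_min = rng.normal(1.5, 1.0, size=(50, 10))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 1000 + [1] * 50)

# (1) Oversample the minority class by resampling with replacement.
idx_min = np.where(y == 1)[0]
boost = rng.choice(idx_min, size=500, replace=True)
X_bal = np.vstack([X, X[boost]])
y_bal = np.concatenate([y, y[boost]])

# (2) Class-weighted loss: misclassifying the minority class costs more.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_bal, y_bal)

print("minority recall:", recall_score(y, clf.predict(X), pos_label=1))
```

The point of both steps is the same: without them, a classifier can reach high accuracy by always predicting "healthy" while missing nearly every failure, which is exactly the minority-recall metric the paper reports separately.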
Computational Modeling for Mortality Prediction in Medical Sciences Based on a Proto-Digital Twin Framework
3
Authors: Victor Leiva, Carlos Martin-Barreiro, Viviana Giampaoli. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 1100-1141 (42 pages)
Mortality prediction in respiratory health is challenging, especially when using large-scale clinical datasets composed primarily of categorical variables. Traditional digital twin (DT) frameworks often rely on longitudinal or sensor-based data, which are not always available in public health contexts. In this article, we propose a novel proto-DT framework for mortality prediction in respiratory health using a large-scale categorical biomedical dataset. This dataset contains 415,711 severe acute respiratory infection cases from the Brazilian Unified Health System, including both COVID-19 and non-COVID-19 patients. Four classification models are trained using cost-sensitive learning to address class imbalance: extreme gradient boosting (XGBoost), logistic regression, random forest, and a deep neural network (DNN). The models are evaluated using accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC-ROC). The framework supports simulated interventions by modifying selected inputs and recalculating predicted mortality. Additionally, we incorporate multiple correspondence analysis and K-means clustering to explore model sensitivity. A Python library has been developed to ensure reproducibility. All models achieve AUC-ROC values near or above 0.85. XGBoost yields the highest accuracy (0.84), while the DNN achieves the highest recall (0.81). Scenario-based simulations reveal how key clinical factors, such as intensive care unit admission and oxygen support, affect predicted outcomes. The proposed proto-DT framework demonstrates the feasibility of mortality prediction and intervention simulation using categorical data alone, providing a foundation for data-driven explainable DTs in public health, even in the absence of time-series data.
Keywords: clinical decision support, cross-sectional analysis, COVID-19, imbalanced classification, interpretable machine learning, scenario-based simulation
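The "simulated interventions by modifying selected inputs and recalculating predicted mortality" step described above can be illustrated on a toy categorical cohort. The feature names, coefficients, and logistic model below are invented assumptions, not the study's data or code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy categorical cohort: [icu_admission, oxygen_support, age_over_60] in {0, 1}.
X = rng.integers(0, 2, size=(5000, 3))
# Hypothetical ground truth: ICU admission and oxygen support raise mortality risk.
logits = -2.0 + 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(5000) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(class_weight="balanced").fit(X, y)

# "What-if" loop: set oxygen_support to 1 for every patient and recompute
# the cohort's mean predicted mortality, as the proto-DT simulation does.
baseline = model.predict_proba(X)[:, 1].mean()
X_int = X.copy()
X_int[:, 1] = 1
intervened = model.predict_proba(X_int)[:, 1].mean()
print(f"baseline {baseline:.3f} -> intervention {intervened:.3f}")
```

Because the intervention only edits inputs and re-runs the already-fitted model, it works on purely categorical, cross-sectional data, which is the point the abstract makes about not needing time series.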
A Robot Grasp Detection Method Based on Neural Architecture Search and Its Interpretability Analysis
4
Authors: Lu Rong, Manyu Xu, Wenbo Zhu, Zhihao Yang, Chao Dong, Yunzhi Zhang, Kai Wang, Bing Zheng. Computers, Materials & Continua, 2026, Issue 4, pp. 1282-1306 (25 pages)
Deep learning has become integral to robotics, particularly in tasks such as robotic grasping, where objects often exhibit diverse shapes, textures, and physical properties. Because of the diverse characteristics of grasp targets, frequent adjustments to the network architecture and parameters are required to avoid a decrease in model accuracy, which presents a significant challenge for non-experts. Neural Architecture Search (NAS) provides a compelling alternative through the automated generation of network architectures, enabling the discovery of models that achieve high accuracy through efficient search algorithms. Compared to manually designed networks, NAS methods can significantly reduce design costs and time expenditure while improving model performance. However, such methods often involve complex topological connections, and these redundant structures can severely reduce computational efficiency. To overcome this challenge, this work puts forward a robotic grasp detection framework founded on NAS. The method automatically designs a lightweight network with high accuracy and low topological complexity, effectively adapting to the target object to generate the optimal grasp pose, thereby significantly improving the success rate of robotic grasping. Additionally, we use Class Activation Mapping (CAM) as an interpretability tool, which captures sensitive information during the perception process through visualized results. The searched model achieved competitive, and in some cases superior, performance on the Cornell and Jacquard public datasets, reaching accuracies of 98.3% and 96.8%, respectively, while sustaining a detection speed of 89 frames per second with only 0.41 million parameters. To further validate its effectiveness beyond benchmark evaluations, we conducted real-world grasping experiments on a UR5 robotic arm, where the model demonstrated reliable performance across diverse objects and high grasp success rates, confirming its practical applicability in robotic manipulation tasks.
Keywords: robotics, grasping detection, neural architecture search, neural network interpretability
The Transparency Revolution in Geohazard Science: A Systematic Review and Research Roadmap for Explainable Artificial Intelligence
5
Authors: Moein Tosan, Vahid Nourani, Ozgur Kisi, Yongqiang Zhang, Sameh A. Kantoush, Mekonnen Gebremichael, Ruhollah Taghizadeh-Mehrjardi, Jinhui Jeanne Huang. Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 77-117 (41 pages)
The integration of machine learning (ML) into geohazard assessment has instigated a paradigm shift, producing models with a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational studies is used to map the intellectual and methodological contours of this rapidly expanding field. The analysis reveals that current research efforts are concentrated predominantly on landslide and flood assessment. Methodologically, tree-based ensembles and deep learning models dominate the literature, with SHapley Additive exPlanations (SHAP) frequently adopted as the principal post-hoc explanation technique. More importantly, the review documents how the role of XAI has shifted: rather than being used solely as a tool for interpreting models after training, it is increasingly integrated into the modeling cycle itself. Recent applications include its use in feature selection, adaptive sampling strategies, and model evaluation. The evidence also shows that GeoXAI extends beyond producing feature rankings: it reveals nonlinear thresholds and interaction effects that generate deeper mechanistic insight into hazard processes. Nevertheless, several key challenges remain unresolved, particularly the need for interpretation stability, the difficulty of reliably distinguishing correlation from causation, and the development of appropriate methods for treating complex spatio-temporal dynamics.
Keywords: explainable artificial intelligence (XAI), geohazard assessment, machine learning, SHAP, trustworthy AI, model interpretability
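Since SHAP recurs throughout the reviewed literature, it may help to see the underlying Shapley computation done exactly by enumerating feature coalitions, which is feasible for a toy three-feature model (production SHAP libraries approximate this sum). The susceptibility model, feature names, and background point below are invented; absent features are filled in from a single background sample, a simplifying assumption:

```python
from itertools import combinations
from math import factorial

import numpy as np

def model(x):
    # Toy susceptibility score: nonlinear threshold in slope, linear otherwise.
    slope, rainfall, distance_to_fault = x
    return 2.0 * max(slope - 0.5, 0.0) + 1.0 * rainfall - 0.5 * distance_to_fault

background = np.array([0.4, 0.2, 0.6])   # "average" conditions
x = np.array([0.9, 0.8, 0.1])            # the site being explained
n = 3

def value(S):
    """Model output with features in S taken from x, the rest from background."""
    z = background.copy()
    for i in S:
        z[i] = x[i]
    return model(z)

# Exact Shapley value: weighted average of marginal contributions over all
# orderings, expressed as a sum over coalitions S not containing feature i.
phi = np.zeros(n)
for i in range(n):
    for size in range(n):
        for S in combinations([j for j in range(n) if j != i], size):
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi[i] += w * (value(S + (i,)) - value(S))

# Efficiency property: the attributions sum to f(x) - f(background).
print(phi, phi.sum(), model(x) - model(background))
```

Because this toy model is additively separable, each feature's attribution equals its own marginal effect; interaction effects in real geohazard models are exactly what makes the coalition averaging (and SHAP's approximations of it) necessary.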
Integration of interpretable machine learning and MT-InSAR for dynamic enhancement of landslide susceptibility in the Three Gorges Reservoir Area
6
Authors: Fancheng Zhao, Fasheng Miao, Yiping Wu, Shunqi Gong, Zhao Qian, Guyue Zheng. Journal of Rock Mechanics and Geotechnical Engineering, 2026, Issue 2, pp. 1193-1212 (20 pages)
Landslide susceptibility mapping (LSM) is an essential tool for mitigating the escalating global risk of landslides. However, challenges such as the heterogeneity of landslide triggers, reactivation exacerbated by extensive engineering activities, and the limited interpretability of data-driven models have hindered the practical application of LSM. This work proposes a novel framework for enhancing LSM that considers the different triggers of accumulation and rock landslides, leveraging interpretable machine learning and Multi-temporal Interferometric Synthetic Aperture Radar (MT-InSAR) technology. Initially, a refined field investigation was conducted to delineate the accumulation and rock areas according to landslide type, leading to the identification of relevant contributing factors. Deformation along the slope was then combined with time-series analysis to derive a landslide activity level (AL) index that recognizes the likelihood of reactivation or dormancy. The SHapley Additive exPlanation (SHAP) technique facilitated the interpretation of factors and the identification of determinants in high-susceptibility areas. The results indicate that random forest (RF) outperformed the other models in both accumulation and rock areas. Key factors, including thickness and weak intercalation, were identified for accumulation and rock landslides, respectively. The introduction of AL substantially enhanced the predictive capability of the LSM and outperformed models that neglect movement trends or deformation rates, with an average ratio of 81.23% in high-susceptibility zones. Moreover, field validation confirmed that 83.8% of newly identified landslides were correctly upgraded. Given its efficiency and operational simplicity, the proposed hybrid model opens new avenues for enhancing LSM in urban settlements worldwide.
Keywords: landslide, susceptibility, interpretable machine learning, multi-temporal interferometric synthetic aperture radar (MT-InSAR), Three Gorges Reservoir Area
A Deep Learning Framework for Heart Disease Prediction with Explainable Artificial Intelligence
7
Authors: Muhammad Adil, Nadeem Javaid, Imran Ahmed, Abrar Ahmed, Nabil Alrajeh. Computers, Materials & Continua, 2026, Issue 1, pp. 1944-1963 (20 pages)
Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the Personal Key Indicators of Heart Disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority-class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering insight into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
Keywords: heart disease, deep learning, localized random affine shadowsampling, local interpretable model-agnostic explanations, Shapley additive explanations, 10-fold cross-validation
AI ethics in geoscience: Toward trustworthy and responsible innovation
8
Authors: Jinran Wu, Xin Tian, You-Gan Wang, Tong Li, Qingyang Liu, Yayong Li, Lizhen Cui, Zhuangcai Tian, Jing Xu, Xianzhou Lyu, Yuming Mo. Geography and Sustainability, 2026, Issue 1, pp. 249-252 (4 pages)
1. Introduction: Artificial intelligence (AI) is rapidly reshaping geoscience, from Earth observation interpretation and hazard forecasting to subsurface characterisation and Earth system modelling (Kochupillai et al., 2022; Sun et al., 2024). These capabilities emerge at a time when geoscientific evidence is increasingly informing high-stakes decisions about climate adaptation, resource development, and disaster risk reduction (McGovern et al., 2022).
Keywords: climate adaptation, resource development, subsurface characterisation, Earth system modelling, hazard forecasting, Earth observation interpretation, disaster risk reduction, artificial intelligence (AI), geoscientific evidence
Tunnel ahead prospecting methods and intelligent interpretation of adverse geology: A review
9
Authors: Shucai Li, Bin Liu, Lei Chen, Huaifeng Sun, Lichao Nie, Zhengyu Liu, Yuxiao Ren. Journal of Rock Mechanics and Geotechnical Engineering, 2026, Issue 1, pp. 1-19 (19 pages)
Geological prospecting and the identification of adverse geological features are essential in tunnel construction, providing critical information to ensure safety and guide engineering decisions. As tunnel projects extend into deeper and more mountainous terrain, engineers face increasingly complex geological conditions, including high water pressure, intense geo-stress, elevated geothermal gradients, and active fault zones. These conditions pose substantial risks such as high-pressure water inrush, large-scale collapses, and tunnel boring machine (TBM) blockages. Addressing these challenges requires advanced detection technologies capable of long-distance, high-precision, and intelligent assessment of adverse geology. This paper presents a comprehensive review of recent advancements in tunnel geological ahead-prospecting methods. It summarizes the fundamental principles, technical maturity, key challenges, development trends, and real-world applications of various detection techniques. Airborne and semi-airborne geophysical methods enable large-scale reconnaissance for initial surveys in complex terrain. Tunnel- and borehole-based approaches offer high-resolution detection during excavation, including seismic ahead prospecting (SAP), TBM rock-breaking source seismic methods, full time-domain tunnel induced polarization (TIP), borehole electrical resistivity, and ground penetrating radar (GPR). To address scenarios involving multiple coexisting adverse geologies, intelligent inversion and geological identification methods have been developed based on multi-source data fusion and artificial intelligence (AI) techniques. Overall, these advances significantly improve detection range, resolution, and geological characterization capability. The methods demonstrate strong adaptability to complex environments and provide reliable subsurface information, supporting safer and more efficient tunnel construction.
Keywords: tunnel geological ahead prospecting, complex geological and environmental conditions, airborne geophysical methods, tunnel geophysical detection, borehole geophysical prospecting, intelligent geological interpretation
Explainable Ensemble Learning Framework for Early Detection of Autism Spectrum Disorder: Enhancing Trust, Interpretability and Reliability in AI-Driven Healthcare
10
Authors: Menwa Alshammeri, Noshina Tariq, NZ Jhanji, Mamoona Humayun, Muhammad Attique Khan. Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 1233-1265 (33 pages)
Artificial Intelligence (AI) is changing healthcare by assisting with diagnosis. However, for clinicians to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types of datasets: (i) a standard behavioral dataset and (ii) a more complex multimodal dataset with images, audio, and physiological information. The datasets were carefully preprocessed for missing values, redundant features, and dataset imbalance to ensure fair learning. The results outperformed the state of the art with a Regularized Neural Network, achieving 97.6% accuracy on the behavioral data and 98.2% on the multimodal data. Other models also performed well, with accuracies consistently above 96%. We also applied SHAP and LIME to the behavioral dataset for model explainability.
Keywords: autism spectrum disorder (ASD), artificial intelligence in healthcare, explainable AI (XAI), ensemble learning, machine learning, early diagnosis, model interpretability, SHAP, LIME, predictive analytics, ethical AI, healthcare trustworthiness
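The combining mechanism behind the ensemble described above can be sketched with scikit-learn's soft-voting combinator, which averages predicted class probabilities across heterogeneous models. XGBoost is swapped here for models shipped with scikit-learn, and the synthetic data and hyperparameters are illustrative assumptions, not the study's setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for screening features; the study's ASD datasets are not reproduced here.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Soft voting averages predict_proba outputs across heterogeneous base models,
# the basic mechanism for combining forests, boosters, and neural networks.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```

Soft voting (rather than hard majority voting) preserves each model's confidence, which tends to help when base models disagree near the decision boundary.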
Producing consistent visually interpreted land cover reference data: learning from feedback (Cited by 1)
11
Authors: Agnieszka Tarko, Nandin-Erdene Tsendbazar, Sytze de Bruin, Arnold K. Bregt. International Journal of Digital Earth (SCIE), 2021, Issue 1, pp. 52-70 (19 pages)
Reference data for large-scale land cover maps are commonly acquired by visual interpretation of remotely sensed data. To assure consistency, multiple images are used, interpreters are trained, sites are interpreted by several individuals, or the procedure includes a review. But little is known about the factors influencing the quality of visually interpreted data. We assessed the effect of multiple variables on land cover class agreement between interpreters and reviewers. Our analyses concerned data collected for validation of a global land cover map within the Copernicus Global Land Service project. Four cycles of visual interpretation were conducted, each followed by review and feedback. Each interpreted site element was labelled according to its dominant land cover type. We assessed relationships between the number of interpretation updates following feedback and variables grouped into personal, training, and environmental categories. Variable importance was assessed using random forest regression. The personal variable interpreter identifier and the training variable timestamp were found to be the strongest predictors of update counts, while the environmental variables complexity and image availability had the least impact. Feedback loops reduced updating and hence improved the consistency of the interpretations. Implementing feedback loops into visually interpreted data collection increases the consistency of acquired land cover reference data.
Keywords: land cover mapping, learning curve, validation, visual interpretation
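The variable-importance step described above (random forest regression over update counts) can be sketched with permutation importance, which measures how much the model's score drops when one predictor's values are shuffled. The synthetic "interpreter/complexity/noise" variables below only mimic the study's finding that personal variables dominate; they are not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Toy stand-in: update counts driven strongly by an interpreter effect,
# weakly by scene complexity, and not at all by a pure-noise variable.
n = 800
interpreter_effect = rng.normal(size=n)
complexity = rng.normal(size=n)
noise_var = rng.normal(size=n)
updates = 3.0 * interpreter_effect + 0.3 * complexity + rng.normal(scale=0.5, size=n)

X = np.column_stack([interpreter_effect, complexity, noise_var])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, updates)

# Permutation importance: shuffle one column, measure the drop in R^2.
imp = permutation_importance(rf, X, updates, n_repeats=10, random_state=0)
for name, score in zip(["interpreter", "complexity", "noise"], imp.importances_mean):
    print(f"{name:12s} {score:.3f}")
```

Permutation importance is often preferred over the forest's built-in impurity importance because it is computed against actual predictive performance and is less biased toward high-cardinality features.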
Atmospheric scattering model and dark channel prior constraint network for environmental monitoring under hazy conditions (Cited by 2)
12
Authors: Lintao Han, Hengyi Lv, Chengshan Han, Yuchen Zhao, Qing Han, Hailong Liu. Journal of Environmental Sciences, 2025, Issue 6, pp. 203-218 (16 pages)
Environmental monitoring systems based on remote sensing technology have a wider monitoring range and longer timeliness, which makes them widely used in the detection and management of pollution sources. However, hazy weather conditions degrade image quality and reduce the precision of environmental monitoring systems. To address this problem, this research proposes a remote sensing image dehazing method based on the atmospheric scattering model and a dark channel prior constrained network. The method consists of a dehazing network, a dark channel information injection network (DCIIN), and a transmission map network. Within the dehazing network, a branch fusion module optimizes feature weights to enhance the dehazing effect. By leveraging dark channel information, the DCIIN enables high-quality estimation of the atmospheric veil. To ensure the output of the deep learning model aligns with physical laws, we reconstruct the haze image using the prediction results from the three networks, then apply a traditional loss function and a dark channel loss function between the reconstructed and original haze images. This approach enhances interpretability and reliability while maintaining adherence to physical principles. Furthermore, the network is trained on a synthesized non-homogeneous haze remote sensing dataset built using dark channel information from cloud maps. The experimental results show that the proposed network achieves better image dehazing on both synthetic and real remote sensing images with non-homogeneous haze distribution. This research provides a new approach to the problem of decreased accuracy of environmental monitoring systems under hazy weather conditions and has strong practicability.
Keywords: remote sensing, image dehazing, environmental monitoring, neural network, interpretability
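The two physical ingredients named in the abstract above, the atmospheric scattering model I = J*t + A*(1-t) and the dark channel prior, can be sketched in a few lines of NumPy. The patch size and toy image below are arbitrary choices for illustration, not the paper's network or data:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel min over RGB channels, then min over a patch x patch window."""
    mins = img.min(axis=2)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

rng = np.random.default_rng(0)
J = rng.random((8, 8, 3))          # haze-free scene
t = 0.6                            # transmission (fraction of light surviving)
A = 1.0                            # atmospheric light
I = J * t + A * (1 - t)            # atmospheric scattering model

# The prior: dark channels of haze-free outdoor images are near zero, so the
# dark channel of the hazy image I is lifted toward A*(1-t), which is what
# lets the transmission map (and hence J) be estimated from I alone.
print(dark_channel(J).mean(), dark_channel(I).mean())
```

Because the minimum is monotone under a positive affine map, the identity dark(I) = t * dark(J) + A*(1-t) holds exactly here, which is the relation dehazing methods invert to estimate t.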
Forecasting landslide deformation by integrating domain knowledge into interpretable deep learning considering spatiotemporal correlations (Cited by 2)
13
Authors: Zhengjing Ma, Gang Mei. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 2, pp. 960-982 (23 pages)
Forecasting landslide deformation is challenging due to the influence of various internal and external factors on the occurrence of systemic and localized heterogeneities. Despite its potential to improve landslide predictability, deep learning has yet to be sufficiently explored for the complex deformation patterns associated with landslides, and it is inherently opaque. Herein, we developed a holistic landslide deformation forecasting method that considers spatiotemporal correlations of landslide deformation by integrating domain knowledge into interpretable deep learning. By spatially capturing the interconnections between deformations at different observation points, our method contributes to the understanding and forecasting of systematic landslide behavior. By integrating domain knowledge relevant to each observation point and merging internal properties with external variables, our method accounts for local heterogeneity, identifying temporal deformation patterns in different landslide zones. Case studies involving reservoir-induced landslides and creeping landslides demonstrated that our approach (1) enhances the accuracy of landslide deformation forecasting, (2) identifies significant contributing factors and their influence on spatiotemporal deformation characteristics, and (3) demonstrates how identifying these factors and patterns facilitates landslide forecasting. Our research offers a promising and pragmatic pathway toward a deeper understanding and forecasting of complex landslide behaviors.
Keywords: geohazards, landslide deformation forecasting, landslide predictability, knowledge-infused deep learning, interpretable machine learning, attention mechanism, Transformer
Preoperative prediction of textbook outcome in intrahepatic cholangiocarcinoma by interpretable machine learning: A multicenter cohort study (Cited by 1)
14
作者 Ting-Feng Huang Cong Luo +9 位作者 Luo-Bin Guo Hong-Zhi Liu Jiang-Tao Li Qi-Zhu Lin Rui-Lin Fan Wei-Ping Zhou Jing-Dong Li Ke-Can Lin Shi-Chuan Tang Yong-Yi Zeng 《World Journal of Gastroenterology》 2025年第11期33-45,共13页
BACKGROUND To investigate the preoperative factors influencing textbook outcomes(TO)in Intrahepatic cholangiocarcinoma(ICC)patients and evaluate the feasibility of an interpretable machine learning model for preoperat... BACKGROUND To investigate the preoperative factors influencing textbook outcomes(TO)in Intrahepatic cholangiocarcinoma(ICC)patients and evaluate the feasibility of an interpretable machine learning model for preoperative prediction of TO,we developed a machine learning model for preoperative prediction of TO and used the SHapley Additive exPlanations(SHAP)technique to illustrate the prediction process.AIM To analyze the factors influencing textbook outcomes before surgery and to establish interpretable machine learning models for preoperative prediction.METHODS A total of 376 patients diagnosed with ICC were retrospectively collected from four major medical institutions in China,covering the period from 2011 to 2017.Logistic regression analysis was conducted to identify preoperative variables associated with achieving TO.Based on these variables,an EXtreme Gradient Boosting(XGBoost)machine learning prediction model was constructed using the XGBoost package.The SHAP(package:Shapviz)algorithm was employed to visualize each variable's contribution to the model's predictions.Kaplan-Meier survival analysis was performed to compare the prognostic differences between the TO-achieving and non-TO-achieving groups.RESULTS Among 376 patients,287 were included in the training group and 89 in the validation group.Logistic regression identified the following preoperative variables influencing TO:Child-Pugh classification,Eastern Cooperative Oncology Group(ECOG)score,hepatitis B,and tumor size.The XGBoost prediction model demonstrated high accuracy in internal validation(AUC=0.8825)and external validation(AUC=0.8346).Survival analysis revealed that the disease-free survival rates for patients achieving TO at 1,2,and 3 years were 64.2%,56.8%,and 
43.4%,respectively.CONCLUSION Child-Pugh classification,ECOG score,hepatitis B,and tumor size are preoperative predictors of TO.In both the training group and the validation group,the machine learning model had certain effectiveness in predicting TO before surgery.The SHAP algorithm provided intuitive visualization of the machine learning prediction process,enhancing its interpretability. 展开更多
Keywords: Intrahepatic cholangiocarcinoma; Textbook outcome; Interpretable machine learning; Prediction; Prognosis
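The SHAP attributions this abstract relies on can be illustrated in miniature. The sketch below computes exact Shapley values for a toy additive "risk score" over three hypothetical preoperative features; the feature names, weights, and values are invented for illustration and are not the paper's XGBoost model:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for a single prediction.

    predict: model as a function of a feature vector.
    Features absent from a coalition are fixed at their baseline value.
    """
    n = len(x)

    def v(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(s) | {i}) - v(set(s)))
        phi.append(total)
    return phi

# Toy linear "risk score" over (Child-Pugh grade, ECOG score, tumor size in cm)
predict = lambda z: 0.5 * z[0] + 0.3 * z[1] + 0.1 * z[2]
phi = shapley_values(predict, x=[2, 1, 5.0], baseline=[1, 0, 3.0])
```

For an additive model like this one, each feature's Shapley value reduces to its weight times its deviation from baseline, and the attributions sum to the difference between the prediction and the baseline prediction, which is the efficiency property SHAP plots depend on.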
Artificial intelligence in natural products research (Cited by 1)
15
Authors: Xiao Yuan, Xiaobo Yang, Qiyuan Pan, Cheng Luo, Xin Luan, Hao Zhang. Chinese Journal of Natural Medicines, 2025, Issue 11, pp. 1342-1357 (16 pages)
Artificial intelligence (AI) has emerged as a transformative technology for accelerating drug discovery and development within natural medicines research. Natural medicines, characterized by their complex chemical compositions and multifaceted pharmacological mechanisms, are widely used to treat diverse diseases. However, their research and development face significant challenges, including component complexity, extraction difficulties, and efficacy validation. AI technology, particularly deep learning (DL) and machine learning (ML) approaches, enables efficient analysis of extensive datasets, facilitating drug screening, component analysis, and elucidation of pharmacological mechanisms. AI demonstrates considerable potential in virtual screening, compound optimization, and synthetic pathway design, thereby enhancing the bioavailability and safety profiles of natural medicines. Nevertheless, current applications face limitations regarding data quality, model interpretability, and ethical considerations. As AI technologies continue to evolve, natural medicines research and development will achieve greater efficiency and precision, advancing both personalized medicine and contemporary drug development.
Keywords: Natural products; Artificial intelligence; Deep learning; Drug discovery; Model interpretability
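Among the applications this survey names, virtual screening is concrete enough to sketch. Below is a minimal similarity-based screen using Tanimoto similarity over binary fingerprints, a standard first-pass technique; the fingerprints and compound names are invented for illustration and do not represent real molecules:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two binary fingerprints,
    represented as sets of 'on' bit positions."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def virtual_screen(query_fp, library, threshold=0.5):
    """Rank library compounds by similarity to a query natural product,
    keeping only those at or above the threshold."""
    hits = [(name, tanimoto(query_fp, fp)) for name, fp in library.items()]
    return sorted([(n, s) for n, s in hits if s >= threshold],
                  key=lambda t: -t[1])

# Hypothetical fingerprints (bit positions), not real molecules
query = {1, 4, 7, 9, 12}
library = {
    "analog_A": {1, 4, 7, 9, 13},   # close analog of the query
    "analog_B": {1, 4, 20, 21},     # only a partial match
    "unrelated": {30, 31, 32},
}
hits = virtual_screen(query, library)
```

In practice the fingerprints would come from a cheminformatics toolkit rather than be written by hand, and the similarity screen would feed into the ML-based ranking steps the survey describes.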
Knowledge Driven Machine Learning Towards Interpretable Intelligent Prognostics and Health Management: Review and Case Study (Cited by 1)
16
Authors: Ruqiang Yan, Zheng Zhou, Zuogang Shang, Zhiying Wang, Chenye Hu, Yasong Li, Yuangui Yang, Xuefeng Chen, Robert X. Gao. Chinese Journal of Mechanical Engineering, 2025, Issue 1, pp. 31-61 (31 pages)
Despite significant progress in the Prognostics and Health Management (PHM) domain from systems that learn patterns from data, machine learning (ML) still faces challenges of limited generalization and weak interpretability. A promising approach to overcoming these challenges is to embed domain knowledge into the ML pipeline, enriching the model with additional pattern information. In this paper, we review the latest developments in PHM, encapsulated under the concept of Knowledge Driven Machine Learning (KDML). We propose a hierarchical framework to define KDML in PHM, comprising scientific paradigms, knowledge sources, knowledge representations, and knowledge embedding methods. Using this framework, we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide a roadmap for their use. Furthermore, we present several case studies that illustrate specific implementations of KDML in the PHM domain, covering inductive experience, physical models, and signal processing. We analyze the improvements in generalization capability and interpretability that KDML can achieve. Finally, we discuss the challenges, potential applications, and usage recommendations of KDML in PHM, with a particular focus on the critical need for interpretability to ensure trustworthy deployment of artificial intelligence in PHM.
Keywords: PHM; Knowledge-driven machine learning; Signal processing; Physics-informed; Interpretability
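One knowledge-embedding pattern surveyed here is injecting signal-processing knowledge at the feature stage. The sketch below extracts the spectral amplitude at a known fault characteristic frequency as a knowledge-driven feature; the 50 Hz fault frequency and the synthetic vibration signal are assumptions for illustration, not a case study from the paper:

```python
from math import cos, sin, pi, hypot

def dft_amplitude(signal, k):
    """Single-sided amplitude of the k-th DFT bin of a real signal."""
    n = len(signal)
    re = sum(x * cos(2 * pi * k * t / n) for t, x in enumerate(signal))
    im = -sum(x * sin(2 * pi * k * t / n) for t, x in enumerate(signal))
    return 2 * hypot(re, im) / n

def fault_feature(signal, fs, fault_freq):
    """Knowledge-driven feature: spectral amplitude at the known
    fault characteristic frequency (rounded to the nearest bin)."""
    k = round(fault_freq * len(signal) / fs)
    return dft_amplitude(signal, k)

# Synthetic vibration: a 1.0-amplitude tone at the assumed 50 Hz fault
# frequency plus a weaker 120 Hz component (one second at fs = 800 Hz)
fs, n = 800, 800
sig = [1.0 * sin(2 * pi * 50 * t / fs) + 0.2 * sin(2 * pi * 120 * t / fs)
       for t in range(n)]
amp = fault_feature(sig, fs, 50.0)
```

Feeding such a physically meaningful scalar into a model, rather than raw samples, is one simple way domain knowledge narrows what the ML pipeline must learn; a practical system would use an FFT and handle spectral leakage.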
Interpretable machine learning excavates a low-alloyed magnesium alloy with strength-ductility synergy based on data augmentation and reconstruction (Cited by 1)
17
Authors: Qinghang Wang, Xu Qin, Shouxin Xia, Li Wang, Weiqi Wang, Weiying Huang, Yan Song, Weineng Tang, Daolun Chen. Journal of Magnesium and Alloys, 2025, Issue 6, pp. 2866-2883 (18 pages)
The application of machine learning in alloy design is increasingly widespread, yet traditional models still face challenges when dealing with limited datasets and complex nonlinear relationships. This work proposes an interpretable machine learning method based on data augmentation and reconstruction to discover high-performance low-alloyed magnesium (Mg) alloys. The data augmentation technique expands the original dataset with Gaussian noise. The data reconstruction method reorganizes and transforms the original data to extract more representative features, significantly improving the model's generalization ability and prediction accuracy, with a coefficient of determination (R^2) of 95.9% for the ultimate tensile strength (UTS) model and 95.3% for the elongation-to-failure (EL) model. The correlation coefficient assisted screening (CCAS) method is proposed to filter low-alloyed target alloys. A new Mg-2.2Mn-0.4Zn-0.2Al-0.2Ca (MZAX2000, wt%) alloy is designed and extruded into bar at given processing parameters, achieving room-temperature strength-ductility synergy with an excellent UTS of 395 MPa and a high EL of 17.9%. This is closely related to the hetero-structured characteristic of the as-extruded MZAX2000 alloy, which consists of coarse grains (16%), fine grains (75%), and fiber regions (9%). This work thus offers new insights into optimizing alloy compositions and processing parameters to attain new high-strength, ductile low-alloyed Mg alloys.
Keywords: Magnesium alloy; Interpretable machine learning; Alloy design; Hetero-structure; Strength-ductility synergy
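The Gaussian-noise data augmentation mentioned in the abstract can be sketched simply: each numeric feature is jittered with zero-mean noise scaled to that feature's standard deviation. The toy alloy rows and the 2% noise fraction below are assumptions for illustration, not the paper's actual dataset or settings:

```python
import random

def augment_gaussian(rows, n_copies=5, noise_frac=0.02, seed=42):
    """Expand a small tabular dataset by jittering each numeric feature
    with zero-mean Gaussian noise scaled to that feature's std dev."""
    rng = random.Random(seed)
    n_feat = len(rows[0])
    means = [sum(r[j] for r in rows) / len(rows) for j in range(n_feat)]
    stds = [(sum((r[j] - means[j]) ** 2 for r in rows) / len(rows)) ** 0.5
            for j in range(n_feat)]
    out = [list(r) for r in rows]  # keep the original samples
    for _ in range(n_copies):
        for r in rows:
            out.append([x + rng.gauss(0.0, noise_frac * s)
                        for x, s in zip(r, stds)])
    return out

# Toy alloy rows: (Mn wt%, Zn wt%, extrusion temperature in deg C)
data = [[2.2, 0.4, 300.0], [1.8, 0.6, 320.0], [2.0, 0.5, 310.0]]
aug = augment_gaussian(data)
```

Scaling the noise per feature keeps the perturbation proportionate for features measured in very different units (composition in wt% versus temperature in degrees), which is the point of the technique for small, heterogeneous alloy datasets.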
An Interpretable Few-Shot Framework for Fault Diagnosis of Train Transmission Systems with Noisy Labels (Cited by 1)
18
Authors: Haiquan Qiu, Biao Wang, Yong Qin, Ao Ding, Zhixin He, Jing Liu, Xin Huang. Journal of Dynamics, Monitoring and Diagnostics, 2025, Issue 1, pp. 65-75 (11 pages)
Intelligent fault diagnosis technology plays an indispensable role in ensuring the safety, stability, and efficiency of railway operations. However, existing studies have two limitations. 1) They are typical black-box models that lack interpretability, and they fuse features by simply stacking them, overlooking the differing importance of individual features, which reduces the credibility and diagnostic accuracy of the models. 2) They ignore the effects of potentially mistaken labels in the training datasets, which disrupt the models' ability to learn the true data distribution and degrade the generalization performance of intelligent diagnosis models, especially when training samples are limited. To address these issues, an interpretable few-shot framework for fault diagnosis with noisy labels is proposed for train transmission systems. In the proposed framework, a feature extractor is constructed from stacked frequency band focus modules, which capture signal features in different frequency bands and adaptively concentrate on the features corresponding to the potential fault characteristic frequency. Then, following the prototypical network, a novel metric-based classifier is developed that tolerates mislabeled support samples when samples are limited. In addition, a new loss function is designed to decrease the impact of label mistakes in query datasets. Finally, fault simulation experiments on subway train transmission systems are designed and conducted, and the effectiveness and superiority of the proposed method are demonstrated through ablation experiments and comparison with existing methods.
Keywords: Few-shot learning; Intelligent fault diagnosis; Interpretability; Noisy labels; Train transmission systems
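The idea of a prototypical-network classifier that tolerates mislabeled support samples can be sketched with coordinate-wise medians in place of the usual mean prototypes. This is one plausible robustification, not necessarily the paper's classifier, and the 2-D embeddings below are invented:

```python
def median(vals):
    s = sorted(vals)
    m = len(s) // 2
    return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2

def prototypes(support):
    """Per-class prototype as the coordinate-wise median of support
    embeddings -- more tolerant of a mislabeled support sample than
    the mean used in the standard prototypical network."""
    return {label: [median([e[d] for e in embs])
                    for d in range(len(embs[0]))]
            for label, embs in support.items()}

def classify(query, protos):
    """Nearest-prototype label under squared Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda lab: dist(query, protos[lab]))

# 2-D toy embeddings; the last "healthy" support is actually mislabeled
support = {
    "healthy": [[0.1, 0.0], [0.0, 0.1], [5.0, 5.0]],  # outlier label noise
    "faulty":  [[4.9, 5.1], [5.1, 4.9], [5.0, 5.0]],
}
protos = prototypes(support)
label = classify([0.05, 0.05], protos)
```

With the median, the single mislabeled support barely shifts the healthy prototype, whereas a mean prototype would be dragged a third of the way toward the faulty cluster.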
Prediction of BOF endpoint carbon content and temperature via CSSA-BP neural network model (Cited by 1)
19
Authors: Xiao-feng Qiu, Run-hao Zhang, Jian Yang. Journal of Iron and Steel Research International, 2025, Issue 3, pp. 578-593 (16 pages)
To predict the endpoint carbon content and temperature in a basic oxygen furnace (BOF), industrial parameters of BOF steelmaking are taken as input values. First, a series of preprocessing steps, including the Pauta criterion, hierarchical clustering, and principal component analysis, was performed on the original data. Second, the prediction results of classic machine learning models, including ridge regression, support vector machine, gradient boosting regression (GBR), random forest regression, back-propagation (BP) neural network, and multi-layer perceptron (MLP), were compared before and after data preprocessing. An improved model (CSSA-BP) was then established, combining a BP network with the sparrow search algorithm enhanced by tent chaotic mapping. Among the seven models, the CSSA-BP model showed the best performance for endpoint carbon prediction, with the lowest mean absolute error (MAE) and root mean square error (RMSE) of 0.01124 and 0.01345 mass%, respectively, and the lowest MAE and RMSE of 8.9839 and 10.9321 degrees C for endpoint temperature prediction. Furthermore, the CSSA-BP and GBR models have the smallest error fluctuation ranges for both endpoint carbon content and temperature predictions. Finally, to improve the interpretability of the model, SHapley additive interpretation (SHAP) was used to analyze the results.
Keywords: BOF steelmaking; Principal component analysis; Hierarchical clustering; CSSA-BP; SHapley additive interpretation
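The Pauta criterion used in preprocessing is the classic 3-sigma rule: discard points farther than three standard deviations from the mean. A minimal sketch on invented endpoint-temperature readings (not the paper's data):

```python
def pauta_filter(values, k=3.0):
    """Pauta (3-sigma) criterion: drop points farther than k standard
    deviations from the mean, a common first step when cleaning
    steelmaking process data."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) <= k * std]

# Hypothetical endpoint temperatures (deg C) with one obvious logging error
temps = [1650, 1655, 1648, 1652, 1651, 1649, 1653, 1650,
         1654, 1647, 1652, 1650, 1651, 1648, 1653, 980]
clean = pauta_filter(temps)
```

One caveat worth knowing: with very few samples a single outlier cannot exceed 3 sigma (the maximum possible z-score is (n-1)/sqrt(n)), so the rule only bites once the batch is reasonably large, as in industrial heats data.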
TELL-Me: A time-series-decomposition-based ensembled lightweight learning model for diverse battery prognosis and diagnosis (Cited by 1)
20
Authors: Kun-Yu Liu, Ting-Ting Wang, Bo-Bo Zou, Hong-Jie Peng, Xinyan Liu. Journal of Energy Chemistry, 2025, Issue 7, pp. 1-8 (8 pages)
As batteries become increasingly essential to energy storage technologies, battery prognosis and diagnosis remain central to ensuring reliable operation and effective management, as well as to in-depth investigation of degradation mechanisms. However, dynamic operating conditions, cell-to-cell inconsistencies, and the limited availability of labeled data pose significant challenges to accurate and robust prognosis and diagnosis. Herein, we introduce a time-series-decomposition-based ensembled lightweight learning model (TELL-Me), which employs a synergistic dual-module framework for accurate and reliable forecasting. The feature module formulates features with physical implications and sheds light on battery aging mechanisms, while the gradient module monitors capacity degradation rates and captures aging trends. TELL-Me achieves high accuracy in end-of-life prediction using minimal historical data from a single battery, without requiring an offline training dataset, and demonstrates impressive generality and robustness across various operating conditions and battery types. Additionally, by correlating feature contributions with degradation mechanisms across different datasets, TELL-Me is endowed with a diagnostic ability that not only enhances prediction reliability but also provides critical insights into the design and optimization of next-generation batteries.
Keywords: Battery prognosis; Interpretable machine learning; Degradation diagnosis; Ensemble learning; Online prediction; Lightweight model
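The time-series decomposition at the heart of TELL-Me can be illustrated with the simplest possible variant: a centered moving average splits a capacity curve into a smooth trend plus a residual. The synthetic capacity data below is invented, and the paper's actual decomposition is certainly more sophisticated:

```python
def moving_average_trend(series, window=5):
    """Centered moving average; the window shrinks at the edges so the
    trend has the same length as the input series."""
    half = window // 2
    n = len(series)
    return [sum(series[max(0, i - half):min(n, i + half + 1)]) /
            (min(n, i + half + 1) - max(0, i - half))
            for i in range(n)]

def decompose(series, window=5):
    """Split a capacity curve into a smooth aging trend and a residual
    holding the cycle-to-cycle fluctuation."""
    trend = moving_average_trend(series, window)
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

# Synthetic capacity fade (Ah): linear degradation plus a small ripple
cap = [2.0 - 0.01 * k + (0.005 if k % 2 else -0.005) for k in range(30)]
trend, resid = decompose(cap)
```

Separating the monotone trend (fed to something like the gradient module, which tracks degradation rate) from the residual keeps short-term measurement ripple from contaminating end-of-life extrapolation.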