Abstract: Tungsten carbide-based (WC-based) cemented carbides are widely recognized as high-performance tool materials. Traditionally, single metals such as cobalt (Co) or nickel (Ni) serve as the binder phase, providing toughness and structural integrity. Replacing this phase with high-entropy alloys (HEAs) offers a promising approach to enhancing mechanical properties and addressing sustainability challenges. However, the complex multi-element composition of HEAs complicates conventional experimental design, making it difficult to explore the vast compositional space efficiently. Traditional trial-and-error methods are time-consuming, resource-intensive, and often ineffective in identifying optimal compositions. In contrast, artificial intelligence (AI)-driven approaches enable rapid screening and optimization of alloy compositions, significantly improving predictive accuracy and interpretability. Feature selection techniques were employed to identify key alloying elements influencing hardness, toughness, and wear resistance. To enhance model interpretability, explainable artificial intelligence (XAI) techniques, namely SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), were applied to quantify the contributions of individual elements and uncover complex elemental interactions. Furthermore, a high-throughput machine learning (ML)-driven screening approach was implemented to optimize the binder phase composition, facilitating the discovery of HEAs with superior mechanical properties. Experimental validation demonstrated strong agreement between model predictions and measured performance, confirming the reliability of the ML framework. This study underscores the potential of integrating ML and XAI for data-driven materials design, providing a novel strategy for optimizing high-entropy cemented carbides.
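The high-throughput screening step described above can be sketched as follows. This is a minimal illustration only: the five binder elements, the synthetic training data, and the linear surrogate model are assumptions for demonstration, not the study's actual elements, dataset, or model.

```python
# Sketch of ML-driven high-throughput screening of binder compositions.
# All data here is synthetic; the element set and response are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: binder compositions (fractions of five elements,
# e.g. Co/Cr/Fe/Ni/Mn) with a made-up "hardness" response.
X_train = rng.dirichlet(np.ones(5), size=200)
true_w = np.array([3.0, 1.0, 0.5, 0.2, -2.0])
y_train = 15.0 + X_train @ true_w + rng.normal(0.0, 0.05, 200)

# Fit a simple linear surrogate (least squares) as the screening model.
A = np.hstack([X_train, np.ones((200, 1))])
w = np.linalg.lstsq(A, y_train, rcond=None)[0]

# High-throughput screening: score a large pool of candidate compositions
# and keep the one with the highest predicted hardness.
candidates = rng.dirichlet(np.ones(5), size=10_000)
pred = np.hstack([candidates, np.ones((10_000, 1))]) @ w
best = candidates[np.argmax(pred)]
```

In a real workflow the linear surrogate would be replaced by the trained ML model, and the candidate pool would be constrained to metallurgically feasible compositions.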
Funding: Funded by the U.S. Nuclear Regulatory Commission's University Nuclear Leadership Program for Research and Development, award number 31310024M0013.
Abstract: While most modern machine learning methods offer speed and accuracy, few promise interpretability or explainability: two key features necessary for highly sensitive industries such as medicine, finance, and engineering. Using eight datasets representative of one especially sensitive industry, nuclear power, this work compares a traditional feedforward neural network (FNN) to a Kolmogorov-Arnold Network (KAN). We consider not only model performance and accuracy but also interpretability through model architecture and explainability through a post-hoc SHapley Additive exPlanations (SHAP) analysis, a game-theory-based feature importance method. In terms of accuracy, we find KANs and FNNs comparable across all datasets when output dimensionality is limited. KANs, which transform into symbolic equations after training, yield perfectly interpretable models, while FNNs remain black boxes. Finally, using the post-hoc explainability results from Kernel SHAP, we find that KANs learn real, physical relations from experimental data, while FNNs simply produce statistically accurate results. Overall, this analysis finds KANs a promising alternative to traditional machine learning methods, particularly in applications requiring both accuracy and comprehensibility.
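The game-theory idea behind SHAP can be made concrete with an exact, from-scratch Shapley-value computation. The paper above uses the Kernel SHAP approximation; this exhaustive version, written only for illustration, is tractable solely for a handful of features, and the toy payoff function is an assumption.

```python
# Exact Shapley values by enumerating all feature coalitions.
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Shapley value of each feature for a coalition-valued payoff function."""
    phi = [0.0] * n_features
    features = list(range(n_features))
    for i in features:
        others = [f for f in features if f != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                # Classic Shapley weight for a coalition of this size.
                weight = (factorial(size) * factorial(n_features - size - 1)
                          / factorial(n_features))
                # Marginal contribution of feature i to this coalition.
                phi[i] += weight * (value_fn(set(subset) | {i}) - value_fn(set(subset)))
    return phi

# Toy additive payoff: feature 0 contributes 3, feature 1 contributes 1.
payoff = lambda s: 3.0 * (0 in s) + 1.0 * (1 in s)
print(shapley_values(payoff, 2))  # additive game -> [3.0, 1.0]
```

Kernel SHAP approximates these values by a weighted regression over sampled coalitions, which is what makes the method feasible for real models with many features.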
Abstract: Predicting the progression from Mild Cognitive Impairment (MCI) to Alzheimer's Disease (AD) is a critical challenge for enabling early intervention and improving patient outcomes. While longitudinal multi-modal neuroimaging data holds immense potential for capturing the spatio-temporal dynamics of disease progression, its effective analysis is hampered by significant challenges: temporal heterogeneity (irregularly sampled scans), multi-modal misalignment, and the propensity of deep learning models to learn spurious, non-causal correlations. We propose CASCADE-Net, a novel end-to-end pipeline for robust and interpretable MCI-to-AD progression prediction. Our architecture introduces a Dynamic Temporal Alignment Module that employs a Neural Ordinary Differential Equation (Neural ODE) to model the continuous, underlying progression of pathology from irregularly sampled scans, effectively mapping heterogeneous patient data to a unified latent timeline. This aligned, noise-reduced spatio-temporal data is then processed by a predictive model featuring a novel Causal Spatial Attention mechanism. This mechanism not only identifies the critical brain regions and their evolution predictive of conversion but also incorporates a counterfactual constraint during training. This constraint ensures the learned features are causally linked to AD pathology by encouraging invariance to non-causal, confounder-based changes. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that CASCADE-Net significantly outperforms state-of-the-art sequential models in prognostic accuracy. Furthermore, our model provides highly interpretable, causally grounded attention maps, offering valuable insights into the disease progression process and fostering greater clinical trust.
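The continuous-time alignment idea behind the Dynamic Temporal Alignment Module can be sketched numerically: a latent state evolved by an ODE can be read out at each patient's irregular scan times, placing all patients on one shared timeline. In the paper the dynamics function is a learned neural network; here it is a fixed linear decay, an assumption made purely for illustration.

```python
# Sketch: evaluate an ODE-driven latent state at irregular observation times.
import numpy as np

def evolve_latent(z0, dynamics, t_obs, dt=0.01):
    """Euler-integrate dz/dt = dynamics(z) and read out the state at t_obs."""
    t, z, out = 0.0, np.asarray(z0, float), []
    for t_target in sorted(t_obs):
        while t < t_target - 1e-12:
            step = min(dt, t_target - t)   # do not overshoot the scan time
            z = z + step * dynamics(z)     # forward Euler step
            t += step
        out.append(z.copy())
    return np.stack(out)

decay = lambda z: -0.5 * z  # toy "pathology" dynamics: exponential decay

# A patient scanned at irregular times is mapped onto the latent timeline.
traj = evolve_latent([1.0], decay, t_obs=[0.3, 1.1, 2.0])
```

A Neural ODE replaces the fixed `decay` with a trained network and the Euler loop with an adaptive solver, but the read-out-at-arbitrary-times property illustrated here is exactly what handles irregular sampling.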
Funding: Supported by the Sichuan Natural Science Foundation Outstanding Youth Science Foundation (No. 2024NSFJQ0053), the National Natural Science Foundation of China (No. 82370235), the Tianfu Qingcheng Plan (No. 1711), and the K-funding of West China Second University Hospital, Sichuan University (No. KZ197).
Abstract: To the Editor: Artificial intelligence (AI) is revolutionizing the biomedical field by enabling advanced data analysis, predictive modeling, and personalized medicine, driving breakthroughs in diagnosis, treatment, and drug discovery. In pursuit of this goal, researchers are attempting to develop AI-based algorithms and establish models for use in clinical settings. Key challenges in this pursuit include ensuring the models' accuracy and consistency and addressing issues such as the interpretability of AI decisions, integration into existing clinical workflows, and ethical considerations like data privacy. Additionally, the success of an AI model lies in the quality and diversity of its training data: robust models require diverse and representative datasets to ensure generalizability across different patient populations, reduce dependence on extensive labeled data, and remain resilient to domain shifts, enabling adaptation to new and unseen cases. Nevertheless, this field continues to grow, especially in image-based AI models for diagnosing diseases such as cardiovascular diseases.
Abstract: With the advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, due to the nonlinear and complex behavior of many of these algorithms, decision-making by such algorithms is not trusted by clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search on Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned from January 2017 to July 2023, focusing on peer-reviewed studies implementing XAI methods on breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most used model-agnostic XAI technique in breast cancer research, explaining model prediction results, diagnosis and classification of biomarkers, and prognosis and survival analysis. Additionally, SHAP primarily explained tree-based ensemble machine learning models. The most common reason is that SHAP is model agnostic, which makes it both popular and useful for explaining any model's predictions. Additionally, it is relatively easy to implement and is well suited to performant models such as tree-based ensembles. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.
Funding: Supported by the National Key R&D Program of China (Grant No. 2023YFE0108600), the Natural Science Foundation of China (Grant No. 51806190), the National Key R&D Program of China (Grant No. 2022YFB3304502), the Scientific Research Fund of Zhejiang University (Grant No. XY2024018), and a self-directed project of the State Key Laboratory of Clean Energy Utilization.
Abstract: Integrated energy systems play a crucial role in global carbon neutrality. Accurate dynamic modeling is essential for optimizing integrated energy systems, requiring concurrent modeling of network topology and multi-energy flow dynamics. Existing dynamic modeling approaches often struggle to solve dynamic characteristics with differential-algebraic coupling forms. With the rapid advancement of AI technologies, the integration of AI with energy systems has become not only a promising avenue but also a critical necessity for modeling modern energy networks. This study integrates graph neural networks with physical principles, proposing an interpretable neural network methodology. The proposed energy-adapted graph-to-sequence model (EnG2S) represents a significant advancement for energy systems, pioneering the embedding of fluid dynamics theory to systematically reveal intrinsic connections between multi-energy flow dynamics and neural network characteristics. Overall, this study establishes a new paradigm for energy system modeling, broadening the boundaries of the integration between AI and energy systems.
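A minimal message-passing step of the kind a graph-based model such as EnG2S builds on can be sketched as follows: node states on an energy network are updated from their neighbors through the adjacency structure. The four-node topology, the trivial weight, and the mean-aggregation update rule here are illustrative assumptions only, not the EnG2S architecture.

```python
# One-feature message passing over a toy energy-network graph.
import numpy as np

A = np.array([[0, 1, 0, 0],      # toy pipeline/line topology: node 1 is a hub
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], float)
deg = A.sum(1, keepdims=True)    # node degrees for mean aggregation

h = np.array([[1.0], [0.0], [0.0], [0.0]])  # e.g. a disturbance at node 0
W = np.eye(1)                    # trivial 1-d "learned" weight for the sketch

for _ in range(2):               # two propagation steps
    h = np.tanh((A / deg) @ h @ W)  # mean-aggregate neighbors, nonlinearity
```

After two steps the disturbance has propagated through the hub to nodes 2 and 3, which is the graph-structured locality that physics-informed variants then constrain with flow equations.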