Dear Editor,I am writing in response to Jamil's letter,"Interpretative Challenges of the Missing Perilymph'Sign in PLF Diagnosis."I concur with the author's emphasis on the necessity for cautious...Dear Editor,I am writing in response to Jamil's letter,"Interpretative Challenges of the Missing Perilymph'Sign in PLF Diagnosis."I concur with the author's emphasis on the necessity for cautious interpretation of low-signal areas as evidence of active perilymph leakage,requiring correlation with clinical findings,surgical confirmation,and longitudinal imaging changes.展开更多
For ecological restoration and reconstruction of the degraded area, it is an important premise to correctly understand the degradation factors of the ecosystem in the arid-hot valleys. The factors including vegetation...For ecological restoration and reconstruction of the degraded area, it is an important premise to correctly understand the degradation factors of the ecosystem in the arid-hot valleys. The factors including vegetation degradation, land degradation, arid climate, policy failure, forest fire, rapid population growth, excessive deforestation, overgrazing, steep slope reclamation, economic poverty, engineering construction, lithology, slope, low cultural level, geological hazards, biological disaster, soil properties etc, were selected to study the Yuanmou arid-hot valleys. Based on the interpretative structural model (ISM), it has found out that the degradation factors of the Yuanmou arid-hot valleys were not at the same level but in a multilevel hierarchical system with internal relations, which pointed out that the degradation mode of the arid-hot valleys was "straight (appearance)-penetrating-background". Such researches have important directive significance for the restoration and reconstruction of the arid-hot valleys ecosystem.展开更多
Nonlinear characteristic fault detection and diagnosis method based on higher-order statistical(HOS) is an effective data-driven method, but the calculation costs much for a large-scale process control system. An HOS-...Nonlinear characteristic fault detection and diagnosis method based on higher-order statistical(HOS) is an effective data-driven method, but the calculation costs much for a large-scale process control system. An HOS-ISM fault diagnosis framework combining interpretative structural model(ISM) and HOS is proposed:(1) the adjacency matrix is determined by partial correlation coefficient;(2) the modified adjacency matrix is defined by directed graph with prior knowledge of process piping and instrument diagram;(3) interpretative structural for large-scale process control system is built by this ISM method; and(4) non-Gaussianity index, nonlinearity index, and total nonlinearity index are calculated dynamically based on interpretative structural to effectively eliminate uncertainty of the nonlinear characteristic diagnostic method with reasonable sampling period and data window. The proposed HOS-ISM fault diagnosis framework is verified by the Tennessee Eastman process and presents improvement for highly non-linear characteristic for selected fault cases.展开更多
Objective: This study aimed to explore the experiences of women in the process of formula feeding their infants. The World Health Organization has emphasized the importance of breastfeeding for infant health. After de...Objective: This study aimed to explore the experiences of women in the process of formula feeding their infants. The World Health Organization has emphasized the importance of breastfeeding for infant health. After decades of breastfeeding promotions,breastfeeding rates in Hong Kong have been rising consistently; however, the low continuation rate is alarming. This study explores women's experiences with formula feeding their infants, including factors affecting their decision to do so.Methods: A qualitative approach using an interpretative phenomenological analysis(IPA) was adopted as the study design. Data were collected from 2014 to 2015 through individual in-depth unstructured interviews with 16 women, conducted between 3 and 12 months after the birth of their infant. Data were analyzed using IPA.Results: Three main themes emerged as follows:(1) self-struggle, with the subthemes of feeling like a milk cow and feeling trapped;(2) family conflict, with the subtheme of sharing the spotlight; and(3) interpersonal tensions, with the subthemes of embarrassment,staring, and innocence. Many mothers suffered various stressors and frustrations during breastfeeding. These findings suggest a number of pertinent areas that need to be considered in preparing an infant feeding campaign.Conclusions: The findings of this study reinforce our knowledge of women's struggles with multiple sources of pressure, such as career demands, childcare demands, and family life after giving birth. All mothers should be given assistance in making informed decisions about the optimal approach to feeding their babies given their individual situation and be provided with support to pursue their chosen feeding method.展开更多
Background: Based on the experience of hospital nurses, the aim of this study is to explore the phenomenon of how work-engaged nurses stay healthy in relationally demanding jobs involving very sick and/or dying patien...Background: Based on the experience of hospital nurses, the aim of this study is to explore the phenomenon of how work-engaged nurses stay healthy in relationally demanding jobs involving very sick and/or dying patients. Method: In-depth interviews were conducted with ten work-engaged nurses employed at the main hospital in one region in Norway. The interviews were interpreted using the Interpretative Phenomenological Analysis method (IPA). Results: The results indicate the importance of using the personal resources: authenticity and a sense of humour for staying healthy. The nurses’ authenticity, in the sense of having a strong sense of ownership towards their personal life experiences, and a sense of having a meaningful life in line with their own values and interests, was an important element when they considered their own health to be good in spite of repetitive strain injuries and perceived stress. These personal resources seem to be positively related to their well-being and work engagement, which serves as an argument for including them among other personal resources, often conceptualized in terms of Psychological Capital (PsyCap). The results also showed that the nurses worked actively and intentionally with conditions that could contribute to safeguarding their own health. Conclusion: The results indicated the importance of stimulating the nurses’ area of knowledge about caring for themselves in order to enable them to maintain good physical and mental health. A focus on self-care should be part of the agenda as early as during nursing education.展开更多
Interpretative structural model(ISM) can transform a multivariate problem into several sub-variable problems to analyze a complex industrial structure in a more efficient way by building a multi-level hierarchical str...Interpretative structural model(ISM) can transform a multivariate problem into several sub-variable problems to analyze a complex industrial structure in a more efficient way by building a multi-level hierarchical structure model. To build an ISM of a production system, the partial correlation coefficient method is proposed to obtain the adjacency matrix, which can be transformed to ISM. According to estimation of correlation coefficient, the result can give actual variable correlations and eliminate effects of intermediate variables. Furthermore, this paper proposes an effective approach using ISM to analyze the main factors and basic mechanisms that affect the energy consumption in an ethylene production system. The case study shows that the proposed energy consumption analysis method is valid and efficient in improvement of energy efficiency in ethylene production.展开更多
Specific energy(SE)is an important index to measure crushing efficiency in mechanized tunnel excavation.Accurate prediction of the SE of tunnel boring machine disc cutters is important for optimizing the crushing proc...Specific energy(SE)is an important index to measure crushing efficiency in mechanized tunnel excavation.Accurate prediction of the SE of tunnel boring machine disc cutters is important for optimizing the crushing process,reducing energy consumption,and minimizing machine wear.Therefore,in this paper,the sparrow search algorithm(SSA),combined with six chaotic mapping strategies,is utilized to optimize the random forest(RF)model for predicting SE,referred to as the COSSA-RF prediction models.For this purpose,an SE prediction database was established for training and validating model performance,encompassing 160 sets of experimental data,each with six input parameters:uniaxial compressive strength(UCS),Brazilian tensile strength(BTS),disc cutter diameter(D),cutter tip width(T),cutter spacing(S),and cutter penetration depth(P),along with a target parameter,SE.The evaluation results indicate that the COSSA-RF models demonstrate superior performance compared to other four machine learning models.In particular,the Chebyshev map-SSA-RF(CHSSA-RF)model achieves the most satisfactory prediction accuracy among all models,resulting in the highest coefficient of determination R2 and dynamic variance-weighted global performance indicator values(0.9756 and 0.0814)and the lowest values of root mean square error(RMSE),mean absolute error(MAE),and mean absolute percentage error(MAPE)(6.4742,4.0003,and 20.41%).Lastly,the results of interpretability analysis of the best model through SHapley Additive exPlanations,local interpretable model-agnostic explanations,and Vivid methods show that the importance of input parameters ranked as follows:UCS,BTS,P,S,T,and D.Moreover,interactions between parameters(UCS and BTS,BTS and P,and BTS and S)significantly influence the model predictions.展开更多
Intermodal travel is considered as an effective method for achieving sustainable urban transportation.Understanding the factors influencing intermodal travel is crucial.Due to the relatively small proportion of interm...Intermodal travel is considered as an effective method for achieving sustainable urban transportation.Understanding the factors influencing intermodal travel is crucial.Due to the relatively small proportion of intermodal trips within cities,datasets are significantly imbalanced,leading to poor performance of traditional logit models.In this paper,we develop a novel interpretable ensemble learning(IEL)model to identify key factors through voting five types of machine learning(ML)models.We test our model on two datasets with different numbers of features.The results show that travel duration,travel distance,vehicle ownership,and distance to the nearest metro station are the key factors influencing intermodal travel,cumulatively contributing nearly 70%in the JDS2021 dataset with 14 features and nearly 80%in the SHS2019 dataset with 8 features.Furthermore,we analyze the interpretability of our model,and compare it with the logit model.Our model enriches the methodology for modeling intermodal travel behavior.展开更多
In recent years,significant advances have been achieved in liver cancer management with the development of artificial intelligence(AI).AI-based pathological analysis can extract crucial information from whole slide im...In recent years,significant advances have been achieved in liver cancer management with the development of artificial intelligence(AI).AI-based pathological analysis can extract crucial information from whole slide images to assist clinicians in all aspects from diagnosis to prognosis and molecular profiling.However,AI techniques have a“black box”nature,which means that interpretability is of utmost importance because it is key to ensuring the reliability of the methods and building trust among clinicians for actual clinical implementation.In this paper,we provide an overview of current technical advancements in the AI-based pathological analysis of liver cancer,and delve into the strategies used in recent studies to unravel the“black box”of AI's decision-making process.展开更多
This article aims to argue that interpreting liangzhi 良知 as innate, original, or cognitive knowledge is likely to fall into "interpretative obfuscation regarding knowledge." First, for Wang, what is inherent in ma...This article aims to argue that interpreting liangzhi 良知 as innate, original, or cognitive knowledge is likely to fall into "interpretative obfuscation regarding knowledge." First, for Wang, what is inherent in mankind is moral agency rather than innate or original knowledge. Therefore, the focus ofzhizhi 致知 and gewu 格物 is instead on moral practice and actualization of virtue rather than on either "the extension of knowledge" or "the investigation of things." Apart from that, drawing support from cognitive knowledge to explicate liangzhi also leads to three related but distinct misconceptions: liangzhi as perfect knowledge, the identity of knowledge and action, and liangzhi as recognition or acknowledgement. By clarifying the above misinterpretations, the meaning and implication of liangzhi will, in turn, also become clearer.展开更多
The integration of machine learning(ML)into geohazard assessment has successfully instigated a paradigm shift,leading to the production of models that possess a level of predictive accuracy previously considered unatt...The integration of machine learning(ML)into geohazard assessment has successfully instigated a paradigm shift,leading to the production of models that possess a level of predictive accuracy previously considered unattainable.However,the black-box nature of these systems presents a significant barrier,hindering their operational adoption,regulatory approval,and full scientific validation.This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence(XAI)as applied to geohazard science(GeoXAI),a domain that aims to resolve the long-standing trade-off between model performance and interpretability.A rigorous synthesis of 87 foundational studies is used to map the intellectual and methodological contours of this rapidly expanding field.The analysis reveals that current research efforts are concentrated predominantly on landslide and flood assessment.Methodologically,tree-based ensembles and deep learning models dominate the literature,with SHapley Additive exPlanations(SHAP)frequently adopted as the principal post-hoc explanation technique.More importantly,the review further documents how the role of XAI has shifted:rather than being used solely as a tool for interpreting models after training,it is increasingly integrated into the modeling cycle itself.Recent applications include its use in feature selection,adaptive sampling strategies,and model evaluation.The evidence also shows that GeoXAI extends beyond producing feature rankings.It reveals nonlinear thresholds and interaction effects that generate deeper mechanistic insights into hazard processes and mechanisms.Nevertheless,several key challenges remain unresolved within the field.These persistent issues are especially pronounced when considering the crucial necessity for interpretation stability,the demanding scholarly task of reliably distinguishing correlation from causation,and the development of appropriate methods for the treatment of complex spatio-temporal dynamics.展开更多
Landslide susceptibility mapping(LSM)is an essential tool for mitigating the escalating global risk of landslides.However,challenges such as the heterogeneity of different landslide triggers,extensive engineering acti...Landslide susceptibility mapping(LSM)is an essential tool for mitigating the escalating global risk of landslides.However,challenges such as the heterogeneity of different landslide triggers,extensive engineering activities exacerbated reactivation,and the interpretability of data-driven models have hindered the practical application of LSM.This work proposes a novel framework for enhancing LSM considering different triggers for accumulation and rock landslides,leveraging interpretable machine learning and Multi-temporal Interferometric Synthetic Aperture Radar(MT-InSAR)technology.Initially,a refined fieldinvestigation was conducted to delineate the accumulation and rock area according to landslide types,leading to the identificationof relevant contributing factors.Deformation along the slope was then combined with time-series analysis to derive a landslide activity level(AL)index to recognize the likelihood of reactivation or dormancy.The SHapley Additive exPlanation(SHAP)technique facilitated the interpretation of factors and the identificationof determinants in high susceptibility areas.The results indicate that random forest(RF)outperformed other models in both accumulation and rock areas.Key factors including thickness and weak intercalation were identifiedfor accumulation and rock landslides.The introduction of AL substantially enhanced the predictive capability of the LSM and outperformed models that neglect movement trends or deformation rates with an average ratio of 81.23%in high susceptibility zones.Besides,the fieldvalidation confirmedthat 83.8%of newly identifiedlandslides were correctly upgraded.Given its efficiencyand operational simplicity,the proposed hybrid model opens new avenues for the feasibility of enhancement in LSM at urban settlements worldwide.展开更多
Heart disease remains a leading cause of mortality worldwide,emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention.However,existing Deep Learni...Heart disease remains a leading cause of mortality worldwide,emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention.However,existing Deep Learning(DL)approaches often face several limitations,including inefficient feature extraction,class imbalance,suboptimal classification performance,and limited interpretability,which collectively hinder their deployment in clinical settings.To address these challenges,we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture.The preprocessing stage involves label encoding and feature scaling.To address the issue of class imbalance inherent in the personal key indicators of the heart disease dataset,the localized random affine shadowsampling technique is employed,which enhances minority class representation while minimizing overfitting.At the core of the framework lies the Deep Residual Network(DeepResNet),which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex,non-linear relationships in the data.Experimental results demonstrate that the proposed model significantly outperforms existing techniques,achieving improvements of 3.26%in accuracy,3.16%in area under the receiver operating characteristics,1.09%in recall,and 1.07%in F1-score.Furthermore,robustness is validated using 10-fold crossvalidation,confirming the model’s generalizability across diverse data distributions.Moreover,model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations,offering valuable insights into the contribution of individual features to model predictions.Overall,the proposed DL framework presents a robust,interpretable,and clinically applicable solution for heart disease prediction.展开更多
Geological prospecting and the identification of adverse geological features are essential in tunnel construction,providing critical information to ensure safety and guide engineering decisions.As tunnel projects exte...Geological prospecting and the identification of adverse geological features are essential in tunnel construction,providing critical information to ensure safety and guide engineering decisions.As tunnel projects extend into deeper and more mountainous terrains,engineers face increasingly complex geological conditions,including high water pressure,intense geo-stress,elevated geothermal gradients,and active fault zones.These conditions pose substantial risks such as high-pressure water inrush,largescale collapses,and tunnel boring machine(TBM)blockages.Addressing these challenges requires advanced detection technologies capable of long-distance,high-precision,and intelligent assessments of adverse geology.This paper presents a comprehensive review of recent advancements in tunnel geological ahead prospecting methods.It summarizes the fundamental principles,technical maturity,key challenges,development trends,and real-world applications of various detection techniques.Airborne and semi-airborne geophysical methods enable large-scale reconnaissance for initial surveys in complex terrain.Tunnel-and borehole-based approaches offer high-resolution detection during excavation,including seismic ahead prospecting(SAP),TBM rock-breaking source seismic methods,fulltime-domain tunnel induced polarization(TIP),borehole electrical resistivity,and ground penetrating radar(GPR).To address scenarios involving multiple,coexisting adverse geologies,intelligent inversion and geological identification methods have been developed based on multi-source data fusion and artificial intelligence(AI)techniques.Overall,these advances significantly improve detection range,resolution,and geological characterization capabilities.The methods demonstrate strong adaptability to complex environments and provide reliable subsurface information,supporting safer and more efficient tunnel construction.展开更多
Artificial Intelligence(AI)is changing healthcare by helping with diagnosis.However,for doctors to trust AI tools,they need to be both accurate and easy to understand.In this study,we created a new machine learning sy...Artificial Intelligence(AI)is changing healthcare by helping with diagnosis.However,for doctors to trust AI tools,they need to be both accurate and easy to understand.In this study,we created a new machine learning system for the early detection of Autism Spectrum Disorder(ASD)in children.Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning.For this,we combined several different models,including Random Forest,XGBoost,and Neural Networks,into a single,more powerful framework.We used two different types of datasets:(i)a standard behavioral dataset and(ii)a more complex multimodal dataset with images,audio,and physiological information.The datasets were carefully preprocessed for missing values,redundant features,and dataset imbalance to ensure fair learning.The results outperformed the state-of-the-art with a Regularized Neural Network,achieving 97.6%accuracy on behavioral data.Whereas,on the multimodal data,the accuracy is 98.2%.Other models also did well with accuracies consistently above 96%.We also used SHAP and LIME on a behavioral dataset for models’explainability.展开更多
Environmentalmonitoring systems based on remote sensing technology have a wider monitoringrange and longer timeliness, which makes them widely used in the detection andmanagement of pollution sources. However, haze we...Environmentalmonitoring systems based on remote sensing technology have a wider monitoringrange and longer timeliness, which makes them widely used in the detection andmanagement of pollution sources. However, haze weather conditions degrade image qualityand reduce the precision of environmental monitoring systems. To address this problem,this research proposes a remote sensing image dehazingmethod based on the atmosphericscattering model and a dark channel prior constrained network. The method consists ofa dehazing network, a dark channel information injection network (DCIIN), and a transmissionmap network. Within the dehazing network, the branch fusion module optimizesfeature weights to enhance the dehazing effect. By leveraging dark channel information,the DCIIN enables high-quality estimation of the atmospheric veil. To ensure the outputof the deep learning model aligns with physical laws, we reconstruct the haze image usingthe prediction results from the three networks. Subsequently, we apply the traditionalloss function and dark channel loss function between the reconstructed haze image and theoriginal haze image. This approach enhances interpretability and reliabilitywhile maintainingadherence to physical principles. Furthermore, the network is trained on a synthesizednon-homogeneous haze remote sensing dataset using dark channel information from cloudmaps. The experimental results show that the proposed network can achieve better imagedehazing on both synthetic and real remote sensing images with non-homogeneous hazedistribution. This research provides a new idea for solving the problem of decreased accuracyof environmental monitoring systems under haze weather conditions and has strongpracticability.展开更多
BACKGROUND To investigate the preoperative factors influencing textbook outcomes(TO)in Intrahepatic cholangiocarcinoma(ICC)patients and evaluate the feasibility of an interpretable machine learning model for preoperat...BACKGROUND To investigate the preoperative factors influencing textbook outcomes(TO)in Intrahepatic cholangiocarcinoma(ICC)patients and evaluate the feasibility of an interpretable machine learning model for preoperative prediction of TO,we developed a machine learning model for preoperative prediction of TO and used the SHapley Additive exPlanations(SHAP)technique to illustrate the prediction process.AIM To analyze the factors influencing textbook outcomes before surgery and to establish interpretable machine learning models for preoperative prediction.METHODS A total of 376 patients diagnosed with ICC were retrospectively collected from four major medical institutions in China,covering the period from 2011 to 2017.Logistic regression analysis was conducted to identify preoperative variables associated with achieving TO.Based on these variables,an EXtreme Gradient Boosting(XGBoost)machine learning prediction model was constructed using the XGBoost package.The SHAP(package:Shapviz)algorithm was employed to visualize each variable's contribution to the model's predictions.Kaplan-Meier survival analysis was performed to compare the prognostic differences between the TO-achieving and non-TO-achieving groups.RESULTS Among 376 patients,287 were included in the training group and 89 in the validation group.Logistic regression identified the following preoperative variables influencing TO:Child-Pugh classification,Eastern Cooperative Oncology Group(ECOG)score,hepatitis B,and tumor size.The XGBoost prediction model demonstrated high accuracy in internal validation(AUC=0.8825)and external validation(AUC=0.8346).Survival analysis revealed that the disease-free survival rates for patients achieving TO at 1,2,and 3 years were 64.2%,56.8%,and 43.4%,respectively.CONCLUSION Child-Pugh classification,ECOG score,hepatitis B,and tumor size are preoperative predictors of TO.In both the training group and the validation group,the machine learning model had certain effectiveness in predicting TO before surgery.The SHAP algorithm provided intuitive visualization of the machine learning prediction process,enhancing its interpretability.展开更多
Artificial intelligence(AI)has emerged as a transformative technology in accelerating drug discovery and development within natural medicines research.Natural medicines,characterized by their complex chemical composit...Artificial intelligence(AI)has emerged as a transformative technology in accelerating drug discovery and development within natural medicines research.Natural medicines,characterized by their complex chemical compositions and multifaceted pharmacological mechanisms,demonstrate widespread application in treating diverse diseases.However,research and development face significant challenges,including component complexity,extraction difficulties,and efficacy validation.AI technology,particularly through deep learning(DL)and machine learning(ML)approaches,enables efficient analysis of extensive datasets,facilitating drug screening,component analysis,and pharmacological mechanism elucidation.The implementation of AI technology demonstrates considerable potential in virtual screening,compound optimization,and synthetic pathway design,thereby enhancing natural medicines’bioavailability and safety profiles.Nevertheless,current applications encounter limitations regarding data quality,model interpretability,and ethical considerations.As AI technologies continue to evolve,natural medicines research and development will achieve greater efficiency and precision,advancing both personalized medicine and contemporary drug development approaches.展开更多
Despite significant progress in the Prognostics and Health Management(PHM)domain using pattern learning systems from data,machine learning(ML)still faces challenges related to limited generalization and weak interpret...Despite significant progress in the Prognostics and Health Management(PHM)domain using pattern learning systems from data,machine learning(ML)still faces challenges related to limited generalization and weak interpretability.A promising approach to overcoming these challenges is to embed domain knowledge into the ML pipeline,enhancing the model with additional pattern information.In this paper,we review the latest developments in PHM,encapsulated under the concept of Knowledge Driven Machine Learning(KDML).We propose a hierarchical framework to define KDML in PHM,which includes scientific paradigms,knowledge sources,knowledge representations,and knowledge embedding methods.Using this framework,we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide roadmap to specific usage.Furthermore,we present several case studies that illustrate specific implementations of KDML in the PHM domain,including inductive experience,physical model,and signal processing.We analyze the improvements in generalization capability and interpretability that KDML can achieve.Finally,we discuss the challenges,potential applications,and usage recommendations of KDML in PHM,with a particular focus on the critical need for interpretability to ensure trustworthy deployment of artificial intelligence in PHM.展开更多
文摘Dear Editor,I am writing in response to Jamil's letter,"Interpretative Challenges of the Missing Perilymph'Sign in PLF Diagnosis."I concur with the author's emphasis on the necessity for cautious interpretation of low-signal areas as evidence of active perilymph leakage,requiring correlation with clinical findings,surgical confirmation,and longitudinal imaging changes.
基金the National Basic Research Program of China (973 Program) ( 2007CB407206)the National Key Technologies Research and Develop-ment Program in the Eleventh Five-Year Plan of China (2006BAC01A11)
文摘For ecological restoration and reconstruction of the degraded area, it is an important premise to correctly understand the degradation factors of the ecosystem in the arid-hot valleys. The factors including vegetation degradation, land degradation, arid climate, policy failure, forest fire, rapid population growth, excessive deforestation, overgrazing, steep slope reclamation, economic poverty, engineering construction, lithology, slope, low cultural level, geological hazards, biological disaster, soil properties etc, were selected to study the Yuanmou arid-hot valleys. Based on the interpretative structural model (ISM), it has found out that the degradation factors of the Yuanmou arid-hot valleys were not at the same level but in a multilevel hierarchical system with internal relations, which pointed out that the degradation mode of the arid-hot valleys was "straight (appearance)-penetrating-background". Such researches have important directive significance for the restoration and reconstruction of the arid-hot valleys ecosystem.
基金Supported by the National Natural Science Foundation of China(61374166)the Doctoral Fund of Ministry of Education of China(20120010110010)the Natural Science Fund of Ningbo(2012A610001)
文摘Nonlinear characteristic fault detection and diagnosis method based on higher-order statistical(HOS) is an effective data-driven method, but the calculation costs much for a large-scale process control system. An HOS-ISM fault diagnosis framework combining interpretative structural model(ISM) and HOS is proposed:(1) the adjacency matrix is determined by partial correlation coefficient;(2) the modified adjacency matrix is defined by directed graph with prior knowledge of process piping and instrument diagram;(3) interpretative structural for large-scale process control system is built by this ISM method; and(4) non-Gaussianity index, nonlinearity index, and total nonlinearity index are calculated dynamically based on interpretative structural to effectively eliminate uncertainty of the nonlinear characteristic diagnostic method with reasonable sampling period and data window. The proposed HOS-ISM fault diagnosis framework is verified by the Tennessee Eastman process and presents improvement for highly non-linear characteristic for selected fault cases.
文摘Objective: This study aimed to explore the experiences of women in the process of formula feeding their infants. The World Health Organization has emphasized the importance of breastfeeding for infant health. After decades of breastfeeding promotions,breastfeeding rates in Hong Kong have been rising consistently; however, the low continuation rate is alarming. This study explores women's experiences with formula feeding their infants, including factors affecting their decision to do so.Methods: A qualitative approach using an interpretative phenomenological analysis(IPA) was adopted as the study design. Data were collected from 2014 to 2015 through individual in-depth unstructured interviews with 16 women, conducted between 3 and 12 months after the birth of their infant. Data were analyzed using IPA.Results: Three main themes emerged as follows:(1) self-struggle, with the subthemes of feeling like a milk cow and feeling trapped;(2) family conflict, with the subtheme of sharing the spotlight; and(3) interpersonal tensions, with the subthemes of embarrassment,staring, and innocence. Many mothers suffered various stressors and frustrations during breastfeeding. These findings suggest a number of pertinent areas that need to be considered in preparing an infant feeding campaign.Conclusions: The findings of this study reinforce our knowledge of women's struggles with multiple sources of pressure, such as career demands, childcare demands, and family life after giving birth. All mothers should be given assistance in making informed decisions about the optimal approach to feeding their babies given their individual situation and be provided with support to pursue their chosen feeding method.
文摘Background: Based on the experience of hospital nurses, the aim of this study is to explore the phenomenon of how work-engaged nurses stay healthy in relationally demanding jobs involving very sick and/or dying patients. Method: In-depth interviews were conducted with ten work-engaged nurses employed at the main hospital in one region in Norway. The interviews were interpreted using the Interpretative Phenomenological Analysis method (IPA). Results: The results indicate the importance of using the personal resources: authenticity and a sense of humour for staying healthy. The nurses’ authenticity, in the sense of having a strong sense of ownership towards their personal life experiences, and a sense of having a meaningful life in line with their own values and interests, was an important element when they considered their own health to be good in spite of repetitive strain injuries and perceived stress. These personal resources seem to be positively related to their well-being and work engagement, which serves as an argument for including them among other personal resources, often conceptualized in terms of Psychological Capital (PsyCap). The results also showed that the nurses worked actively and intentionally with conditions that could contribute to safeguarding their own health. Conclusion: The results indicated the importance of stimulating the nurses’ area of knowledge about caring for themselves in order to enable them to maintain good physical and mental health. A focus on self-care should be part of the agenda as early as during nursing education.
基金Supported by the National Natural Science Foundation of China(61374166,6153303)the Doctoral Fund of Ministry of Education of China(20120010110010)the Fundamental Research Funds for the Central Universities(YS1404,JD1413,ZY1502)
文摘Interpretative structural model(ISM) can transform a multivariate problem into several sub-variable problems to analyze a complex industrial structure in a more efficient way by building a multi-level hierarchical structure model. To build an ISM of a production system, the partial correlation coefficient method is proposed to obtain the adjacency matrix, which can be transformed to ISM. According to estimation of correlation coefficient, the result can give actual variable correlations and eliminate effects of intermediate variables. Furthermore, this paper proposes an effective approach using ISM to analyze the main factors and basic mechanisms that affect the energy consumption in an ethylene production system. The case study shows that the proposed energy consumption analysis method is valid and efficient in improvement of energy efficiency in ethylene production.
基金supported by the National Natural Science Foundation of China(Grant Nos.52474121 and 42177164)the Outstanding Youth Project of Hunan Provincial Department of Education(Project No.23B0008)the Distinguished Youth Science Foundation of Hunan Province of China(Grant No.2022JJ10073).
文摘Specific energy(SE)is an important index to measure crushing efficiency in mechanized tunnel excavation.Accurate prediction of the SE of tunnel boring machine disc cutters is important for optimizing the crushing process,reducing energy consumption,and minimizing machine wear.Therefore,in this paper,the sparrow search algorithm(SSA),combined with six chaotic mapping strategies,is utilized to optimize the random forest(RF)model for predicting SE,referred to as the COSSA-RF prediction models.For this purpose,an SE prediction database was established for training and validating model performance,encompassing 160 sets of experimental data,each with six input parameters:uniaxial compressive strength(UCS),Brazilian tensile strength(BTS),disc cutter diameter(D),cutter tip width(T),cutter spacing(S),and cutter penetration depth(P),along with a target parameter,SE.The evaluation results indicate that the COSSA-RF models demonstrate superior performance compared to other four machine learning models.In particular,the Chebyshev map-SSA-RF(CHSSA-RF)model achieves the most satisfactory prediction accuracy among all models,resulting in the highest coefficient of determination R2 and dynamic variance-weighted global performance indicator values(0.9756 and 0.0814)and the lowest values of root mean square error(RMSE),mean absolute error(MAE),and mean absolute percentage error(MAPE)(6.4742,4.0003,and 20.41%).Lastly,the results of interpretability analysis of the best model through SHapley Additive exPlanations,local interpretable model-agnostic explanations,and Vivid methods show that the importance of input parameters ranked as follows:UCS,BTS,P,S,T,and D.Moreover,interactions between parameters(UCS and BTS,BTS and P,and BTS and S)significantly influence the model predictions.
基金supported by the National Natural Science Foundation of China(No.52172320)the Fundamental Research Funds for the Central Universities(Nos.2023-4-ZD-01 and 22120210542).
文摘Intermodal travel is considered as an effective method for achieving sustainable urban transportation.Understanding the factors influencing intermodal travel is crucial.Due to the relatively small proportion of intermodal trips within cities,datasets are significantly imbalanced,leading to poor performance of traditional logit models.In this paper,we develop a novel interpretable ensemble learning(IEL)model to identify key factors through voting five types of machine learning(ML)models.We test our model on two datasets with different numbers of features.The results show that travel duration,travel distance,vehicle ownership,and distance to the nearest metro station are the key factors influencing intermodal travel,cumulatively contributing nearly 70%in the JDS2021 dataset with 14 features and nearly 80%in the SHS2019 dataset with 8 features.Furthermore,we analyze the interpretability of our model,and compare it with the logit model.Our model enriches the methodology for modeling intermodal travel behavior.
基金supported by the National Natural Science Foundation of China(Nos 81961128025 and 82273187)the Research Projects from the Science and Technology Commission of Shanghai Municipality(Nos 21JC1401200 and 20JC1418900)the Natural Science Foundation of Fujian Province(No.2023J05292).
文摘In recent years,significant advances have been achieved in liver cancer management with the development of artificial intelligence(AI).AI-based pathological analysis can extract crucial information from whole slide images to assist clinicians in all aspects from diagnosis to prognosis and molecular profiling.However,AI techniques have a“black box”nature,which means that interpretability is of utmost importance because it is key to ensuring the reliability of the methods and building trust among clinicians for actual clinical implementation.In this paper,we provide an overview of current technical advancements in the AI-based pathological analysis of liver cancer,and delve into the strategies used in recent studies to unravel the“black box”of AI's decision-making process.
文摘This article aims to argue that interpreting liangzhi 良知 as innate, original, or cognitive knowledge is likely to fall into "interpretative obfuscation regarding knowledge." First, for Wang, what is inherent in mankind is moral agency rather than innate or original knowledge. Therefore, the focus ofzhizhi 致知 and gewu 格物 is instead on moral practice and actualization of virtue rather than on either "the extension of knowledge" or "the investigation of things." Apart from that, drawing support from cognitive knowledge to explicate liangzhi also leads to three related but distinct misconceptions: liangzhi as perfect knowledge, the identity of knowledge and action, and liangzhi as recognition or acknowledgement. By clarifying the above misinterpretations, the meaning and implication of liangzhi will, in turn, also become clearer.
文摘The integration of machine learning(ML)into geohazard assessment has successfully instigated a paradigm shift,leading to the production of models that possess a level of predictive accuracy previously considered unattainable.However,the black-box nature of these systems presents a significant barrier,hindering their operational adoption,regulatory approval,and full scientific validation.This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence(XAI)as applied to geohazard science(GeoXAI),a domain that aims to resolve the long-standing trade-off between model performance and interpretability.A rigorous synthesis of 87 foundational studies is used to map the intellectual and methodological contours of this rapidly expanding field.The analysis reveals that current research efforts are concentrated predominantly on landslide and flood assessment.Methodologically,tree-based ensembles and deep learning models dominate the literature,with SHapley Additive exPlanations(SHAP)frequently adopted as the principal post-hoc explanation technique.More importantly,the review further documents how the role of XAI has shifted:rather than being used solely as a tool for interpreting models after training,it is increasingly integrated into the modeling cycle itself.Recent applications include its use in feature selection,adaptive sampling strategies,and model evaluation.The evidence also shows that GeoXAI extends beyond producing feature rankings.It reveals nonlinear thresholds and interaction effects that generate deeper mechanistic insights into hazard processes and mechanisms.Nevertheless,several key challenges remain unresolved within the field.These persistent issues are especially pronounced when considering the crucial necessity for interpretation stability,the demanding scholarly task of reliably distinguishing correlation from causation,and the development of appropriate methods for the treatment of complex spatio-temporal dynamics.
基金supported by the National Key R&D Program of China(Grant No.2023YFC3007201)the National Natural Science Foundation of China(Grant No.42377161)the Opening Fund of Key Laboratory of Geological Survey and Evaluation of Ministry of Education(Grant No.GLAB 2024ZR03).
文摘Landslide susceptibility mapping(LSM)is an essential tool for mitigating the escalating global risk of landslides.However,challenges such as the heterogeneity of different landslide triggers,extensive engineering activities exacerbated reactivation,and the interpretability of data-driven models have hindered the practical application of LSM.This work proposes a novel framework for enhancing LSM considering different triggers for accumulation and rock landslides,leveraging interpretable machine learning and Multi-temporal Interferometric Synthetic Aperture Radar(MT-InSAR)technology.Initially,a refined fieldinvestigation was conducted to delineate the accumulation and rock area according to landslide types,leading to the identificationof relevant contributing factors.Deformation along the slope was then combined with time-series analysis to derive a landslide activity level(AL)index to recognize the likelihood of reactivation or dormancy.The SHapley Additive exPlanation(SHAP)technique facilitated the interpretation of factors and the identificationof determinants in high susceptibility areas.The results indicate that random forest(RF)outperformed other models in both accumulation and rock areas.Key factors including thickness and weak intercalation were identifiedfor accumulation and rock landslides.The introduction of AL substantially enhanced the predictive capability of the LSM and outperformed models that neglect movement trends or deformation rates with an average ratio of 81.23%in high susceptibility zones.Besides,the fieldvalidation confirmedthat 83.8%of newly identifiedlandslides were correctly upgraded.Given its efficiencyand operational simplicity,the proposed hybrid model opens new avenues for the feasibility of enhancement in LSM at urban settlements worldwide.
基金funded by Ongoing Research Funding Program for Project number(ORF-2025-648),King Saud University,Riyadh,Saudi Arabia.
文摘Heart disease remains a leading cause of mortality worldwide,emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention.However,existing Deep Learning(DL)approaches often face several limitations,including inefficient feature extraction,class imbalance,suboptimal classification performance,and limited interpretability,which collectively hinder their deployment in clinical settings.To address these challenges,we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture.The preprocessing stage involves label encoding and feature scaling.To address the issue of class imbalance inherent in the personal key indicators of the heart disease dataset,the localized random affine shadowsampling technique is employed,which enhances minority class representation while minimizing overfitting.At the core of the framework lies the Deep Residual Network(DeepResNet),which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex,non-linear relationships in the data.Experimental results demonstrate that the proposed model significantly outperforms existing techniques,achieving improvements of 3.26%in accuracy,3.16%in area under the receiver operating characteristics,1.09%in recall,and 1.07%in F1-score.Furthermore,robustness is validated using 10-fold crossvalidation,confirming the model’s generalizability across diverse data distributions.Moreover,model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations,offering valuable insights into the contribution of individual features to model predictions.Overall,the proposed DL framework presents a robust,interpretable,and clinically applicable solution for heart disease prediction.
基金supported by the National Natural Science Foundation of China(Grant Nos.52021005,52325904,and 51991391)。
文摘Geological prospecting and the identification of adverse geological features are essential in tunnel construction,providing critical information to ensure safety and guide engineering decisions.As tunnel projects extend into deeper and more mountainous terrains,engineers face increasingly complex geological conditions,including high water pressure,intense geo-stress,elevated geothermal gradients,and active fault zones.These conditions pose substantial risks such as high-pressure water inrush,largescale collapses,and tunnel boring machine(TBM)blockages.Addressing these challenges requires advanced detection technologies capable of long-distance,high-precision,and intelligent assessments of adverse geology.This paper presents a comprehensive review of recent advancements in tunnel geological ahead prospecting methods.It summarizes the fundamental principles,technical maturity,key challenges,development trends,and real-world applications of various detection techniques.Airborne and semi-airborne geophysical methods enable large-scale reconnaissance for initial surveys in complex terrain.Tunnel-and borehole-based approaches offer high-resolution detection during excavation,including seismic ahead prospecting(SAP),TBM rock-breaking source seismic methods,fulltime-domain tunnel induced polarization(TIP),borehole electrical resistivity,and ground penetrating radar(GPR).To address scenarios involving multiple,coexisting adverse geologies,intelligent inversion and geological identification methods have been developed based on multi-source data fusion and artificial intelligence(AI)techniques.Overall,these advances significantly improve detection range,resolution,and geological characterization capabilities.The methods demonstrate strong adaptability to complex environments and provide reliable subsurface information,supporting safer and more efficient tunnel construction.
基金the King Salman center for Disability Research for funding this work through Research Group No.KSRG-2024-050.
文摘Artificial Intelligence(AI)is changing healthcare by helping with diagnosis.However,for doctors to trust AI tools,they need to be both accurate and easy to understand.In this study,we created a new machine learning system for the early detection of Autism Spectrum Disorder(ASD)in children.Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning.For this,we combined several different models,including Random Forest,XGBoost,and Neural Networks,into a single,more powerful framework.We used two different types of datasets:(i)a standard behavioral dataset and(ii)a more complex multimodal dataset with images,audio,and physiological information.The datasets were carefully preprocessed for missing values,redundant features,and dataset imbalance to ensure fair learning.The results outperformed the state-of-the-art with a Regularized Neural Network,achieving 97.6%accuracy on behavioral data.Whereas,on the multimodal data,the accuracy is 98.2%.Other models also did well with accuracies consistently above 96%.We also used SHAP and LIME on a behavioral dataset for models’explainability.
基金supported by the National Natural Science Foundation of China(No.51605054).
文摘Environmentalmonitoring systems based on remote sensing technology have a wider monitoringrange and longer timeliness, which makes them widely used in the detection andmanagement of pollution sources. However, haze weather conditions degrade image qualityand reduce the precision of environmental monitoring systems. To address this problem,this research proposes a remote sensing image dehazingmethod based on the atmosphericscattering model and a dark channel prior constrained network. The method consists ofa dehazing network, a dark channel information injection network (DCIIN), and a transmissionmap network. Within the dehazing network, the branch fusion module optimizesfeature weights to enhance the dehazing effect. By leveraging dark channel information,the DCIIN enables high-quality estimation of the atmospheric veil. To ensure the outputof the deep learning model aligns with physical laws, we reconstruct the haze image usingthe prediction results from the three networks. Subsequently, we apply the traditionalloss function and dark channel loss function between the reconstructed haze image and theoriginal haze image. This approach enhances interpretability and reliabilitywhile maintainingadherence to physical principles. Furthermore, the network is trained on a synthesizednon-homogeneous haze remote sensing dataset using dark channel information from cloudmaps. The experimental results show that the proposed network can achieve better imagedehazing on both synthetic and real remote sensing images with non-homogeneous hazedistribution. This research provides a new idea for solving the problem of decreased accuracyof environmental monitoring systems under haze weather conditions and has strongpracticability.
基金Supported by National Key Research and Development Program,No.2022YFC2407304Major Research Project for Middle-Aged and Young Scientists of Fujian Provincial Health Commission,No.2021ZQNZD013+2 种基金The National Natural Science Foundation of China,No.62275050Fujian Province Science and Technology Innovation Joint Fund Project,No.2019Y9108Major Science and Technology Projects of Fujian Province,No.2021YZ036017.
文摘BACKGROUND To investigate the preoperative factors influencing textbook outcomes(TO)in Intrahepatic cholangiocarcinoma(ICC)patients and evaluate the feasibility of an interpretable machine learning model for preoperative prediction of TO,we developed a machine learning model for preoperative prediction of TO and used the SHapley Additive exPlanations(SHAP)technique to illustrate the prediction process.AIM To analyze the factors influencing textbook outcomes before surgery and to establish interpretable machine learning models for preoperative prediction.METHODS A total of 376 patients diagnosed with ICC were retrospectively collected from four major medical institutions in China,covering the period from 2011 to 2017.Logistic regression analysis was conducted to identify preoperative variables associated with achieving TO.Based on these variables,an EXtreme Gradient Boosting(XGBoost)machine learning prediction model was constructed using the XGBoost package.The SHAP(package:Shapviz)algorithm was employed to visualize each variable's contribution to the model's predictions.Kaplan-Meier survival analysis was performed to compare the prognostic differences between the TO-achieving and non-TO-achieving groups.RESULTS Among 376 patients,287 were included in the training group and 89 in the validation group.Logistic regression identified the following preoperative variables influencing TO:Child-Pugh classification,Eastern Cooperative Oncology Group(ECOG)score,hepatitis B,and tumor size.The XGBoost prediction model demonstrated high accuracy in internal validation(AUC=0.8825)and external validation(AUC=0.8346).Survival analysis revealed that the disease-free survival rates for patients achieving TO at 1,2,and 3 years were 64.2%,56.8%,and 43.4%,respectively.CONCLUSION Child-Pugh classification,ECOG score,hepatitis B,and tumor size are preoperative predictors of TO.In both the training group and the validation group,the machine learning model had certain effectiveness in predicting TO before surgery.The SHAP algorithm provided intuitive visualization of the machine learning prediction process,enhancing its interpretability.
基金supports from the National Key Research and Development Program of China(No.2020YFE0202200)the National Natural Science Foundation of China(Nos.81903538,82322073,92253303)+1 种基金the Innovation Team and Talents Cultivation Program of National Administration of Traditional Chinese Medicine(No.ZYYCXTD-D-202004)the Science and Technology Commission of Shanghai Municipality(Nos.22ZR1474200,24JS2830200).
文摘Artificial intelligence(AI)has emerged as a transformative technology in accelerating drug discovery and development within natural medicines research.Natural medicines,characterized by their complex chemical compositions and multifaceted pharmacological mechanisms,demonstrate widespread application in treating diverse diseases.However,research and development face significant challenges,including component complexity,extraction difficulties,and efficacy validation.AI technology,particularly through deep learning(DL)and machine learning(ML)approaches,enables efficient analysis of extensive datasets,facilitating drug screening,component analysis,and pharmacological mechanism elucidation.The implementation of AI technology demonstrates considerable potential in virtual screening,compound optimization,and synthetic pathway design,thereby enhancing natural medicines’bioavailability and safety profiles.Nevertheless,current applications encounter limitations regarding data quality,model interpretability,and ethical considerations.As AI technologies continue to evolve,natural medicines research and development will achieve greater efficiency and precision,advancing both personalized medicine and contemporary drug development approaches.
基金Supported in part by Science Center for Gas Turbine Project(Project No.P2022-DC-I-003-001)National Natural Science Foundation of China(Grant No.52275130).
文摘Despite significant progress in the Prognostics and Health Management(PHM)domain using pattern learning systems from data,machine learning(ML)still faces challenges related to limited generalization and weak interpretability.A promising approach to overcoming these challenges is to embed domain knowledge into the ML pipeline,enhancing the model with additional pattern information.In this paper,we review the latest developments in PHM,encapsulated under the concept of Knowledge Driven Machine Learning(KDML).We propose a hierarchical framework to define KDML in PHM,which includes scientific paradigms,knowledge sources,knowledge representations,and knowledge embedding methods.Using this framework,we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide roadmap to specific usage.Furthermore,we present several case studies that illustrate specific implementations of KDML in the PHM domain,including inductive experience,physical model,and signal processing.We analyze the improvements in generalization capability and interpretability that KDML can achieve.Finally,we discuss the challenges,potential applications,and usage recommendations of KDML in PHM,with a particular focus on the critical need for interpretability to ensure trustworthy deployment of artificial intelligence in PHM.