This study provides an in-depth comparative evaluation of landslide susceptibility using two distinct spatial units: slope units (SUs) and hydrological response units (HRUs), within Goesan County, South Korea. Leveraging the capabilities of the extreme gradient boosting (XGB) algorithm combined with Shapley Additive Explanations (SHAP), this work assesses the precision and clarity with which each unit predicts areas vulnerable to landslides. SUs delineate geomorphological features such as ridges and valleys, emphasizing slope stability and landslide triggers. Conversely, HRUs are established from a variety of hydrological factors, including land cover, soil type and slope gradient, to encapsulate the region's dynamic water processes. The methodological framework includes the systematic gathering, preparation and analysis of data, ranging from historical landslide occurrences to topographical and environmental variables such as elevation, slope angle and curvature. The XGB algorithm used to construct the Landslide Susceptibility Model (LSM) was combined with SHAP for model interpretation, and the results were evaluated using random cross-validation (RCV) to ensure accuracy and reliability. To ensure optimal model performance, the XGB algorithm's hyperparameters were tuned using Differential Evolution, considering only multicollinearity-free variables. The results show that both SUs and HRUs are effective for LSM, but their effectiveness varies with landscape characteristics. The XGB algorithm demonstrates strong predictive power, and SHAP enhances model transparency regarding the influential variables involved. This work underscores the importance of selecting assessment units tailored to specific landscape characteristics for accurate LSM. The integration of advanced machine learning techniques with interpretative tools offers a robust framework for landslide susceptibility assessment, improving both predictive capabilities and model interpretability. Future research should integrate broader data sets and explore hybrid analytical models to strengthen the generalizability of these findings across varied geographical settings.
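The Differential Evolution tuning step mentioned above can be sketched in a few lines. Below is an illustrative, self-contained DE/rand/1/bin loop (assuming numpy); the `cv_error` objective is a hypothetical stand-in for the cross-validated error of an XGB model at a given hyperparameter vector, not the study's actual setup.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=20, f_scale=0.8, cr=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin minimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([obj(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct partners and build the mutant vector.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + f_scale * (b - c), lo, hi)
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True      # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            if f_trial <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Hypothetical stand-in objective: in the study this would be the
# cross-validated error of the XGB model at x = [learning_rate, max_depth].
def cv_error(x):
    return (x[0] - 0.1) ** 2 + (x[1] - 6.0) ** 2

best_x, best_f = differential_evolution(cv_error, bounds=[(0.01, 0.5), (2, 10)])
```

In practice the real objective is expensive (each evaluation retrains the model), which is why small population sizes and early stopping are common.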
The methods of network attacks have become increasingly sophisticated, rendering traditional cybersecurity defense mechanisms insufficient to address novel and complex threats effectively. In recent years, artificial intelligence has achieved significant progress in the field of network security. However, many challenges remain, particularly regarding the interpretability of deep learning and ensemble learning algorithms. To address the challenge of enhancing the interpretability of network attack prediction models, this paper proposes a method that combines the Light Gradient Boosting Machine (LGBM) and SHapley Additive exPlanations (SHAP). LGBM is employed to model anomalous fluctuations in various network indicators, enabling the rapid and accurate identification and prediction of potential network attack types and thereby facilitating timely defense measures. The model achieved an accuracy of 0.977, precision of 0.985, recall of 0.975, and an F1 score of 0.979, outperforming other models in the domain of network attack prediction. SHAP is utilized to analyze the black-box decision-making process of the model, providing interpretability by quantifying the contribution of each feature to the prediction results and elucidating the relationships between features. The experimental results demonstrate that the LGBM-based network attack prediction model exhibits superior accuracy and outstanding predictive capability, while the SHAP-based interpretability analysis significantly improves the model's transparency and interpretability.
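The scores reported above (accuracy, precision, recall, F1) follow the standard binary-classification definitions, which can be computed directly from confusion-matrix counts. The counts below are invented for illustration and are not the paper's data:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted attacks, how many were real
    recall = tp / (tp + fn)             # of real attacks, how many were caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts: 195 true positives, 3 false positives,
# 5 false negatives, 97 true negatives.
acc, prec, rec, f1 = classification_metrics(195, 3, 5, 97)
```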
Predicting molecular properties is essential for advancing drug discovery and design. Recently, Graph Neural Networks (GNNs) have gained prominence due to their ability to capture the complex structural and relational information inherent in molecular graphs. Despite their effectiveness, the "black-box" nature of GNNs remains a significant obstacle to their widespread adoption in chemistry, as it hinders interpretability and trust. In this context, several explanation methods based on factual reasoning have emerged. These methods aim to interpret the predictions made by GNNs by analyzing the key features contributing to a prediction. However, such approaches fail to answer a critical question: how to ensure that the structure-property mapping learned by GNNs is consistent with established domain knowledge. In this paper, we propose MMGCF, a novel counterfactual explanation framework designed specifically for GNN-based molecular property prediction. MMGCF constructs a hierarchical tree structure over molecular motifs, enabling the systematic generation of counterfactuals through motif perturbations. This framework identifies causally significant motifs and elucidates their impact on model predictions, offering insights into the relationship between structural modifications and predicted properties. Our method demonstrates its effectiveness through comprehensive quantitative and qualitative evaluations on four real-world molecular datasets.
Deep learning models have become a core technological tool in the field of medical image analysis. However, these models often suffer from a lack of transparency in their decision-making processes, leading to challenges related to trust and interpretability in clinical applications. To address this issue, explainable artificial intelligence (XAI) techniques have been applied to medical image analysis. While showing promising potential, XAI also brings significant ethical risks in practice, most notably the problem of spurious explanations. Such explanations may raise further concerns regarding patient privacy, data security, and the attribution of decision-making authority in medical contexts. This paper analyzes the application of XAI methods, particularly saliency maps, in medical image interpretation, identifies the underlying causes of spurious explanations, and proposes possible mitigation strategies. The aim is to contribute to the responsible and sustainable integration of explainable AI into clinical practice.
Accurate prediction of shield tunneling-induced settlement is a complex problem that requires consideration of many influential parameters. Recent studies reveal that machine learning (ML) algorithms can predict the settlement caused by tunneling. However, well-performing ML models are usually less interpretable, and irrelevant input features decrease both the performance and interpretability of an ML model. Nonetheless, feature selection, a critical step in the ML pipeline, is usually ignored in most studies focused on predicting tunneling-induced settlement. This study applies four techniques, i.e. the Pearson correlation method, sequential forward selection (SFS), sequential backward selection (SBS) and the Boruta algorithm, to investigate the effect of feature selection on model performance when predicting the tunneling-induced maximum surface settlement (S_(max)). The data set used in this study was compiled from two metro tunnel projects excavated in Hangzhou, China using earth pressure balance (EPB) shields and consists of 14 input features and a single output (i.e. S_(max)). The ML model trained on features selected by the Boruta algorithm demonstrates the best performance in both the training and testing phases. The relevant features chosen by the Boruta algorithm further indicate that tunneling-induced settlement is affected by parameters related to tunnel geometry, geological conditions and shield operation. The recently proposed Shapley additive explanations (SHAP) method explores how the input features contribute to the output of a complex ML model. It is observed that larger settlements are induced during shield tunneling in silty clay. Moreover, the SHAP analysis reveals that low magnitudes of face pressure at the top of the shield increase the model's output.
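The Boruta test at the heart of the best-performing pipeline above compares each real feature's importance against "shadow" copies with shuffled values. A minimal sketch of that idea (assuming numpy and scikit-learn; the `boruta_like_select` helper and the synthetic data are illustrative, not the study's implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def boruta_like_select(X, y, n_trials=5, seed=0):
    """Sketch of Boruta's core test: a feature is kept only if its importance
    beats the best importance among shuffled 'shadow' copies, in a majority
    of trials."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    hits = np.zeros(n_features, dtype=int)
    for t in range(n_trials):
        shadows = rng.permuted(X, axis=0)        # shuffle each column: destroys
        X_aug = np.hstack([X, shadows])          # any feature-target link
        rf = RandomForestRegressor(n_estimators=100, random_state=t).fit(X_aug, y)
        imp = rf.feature_importances_
        threshold = imp[n_features:].max()       # best shadow importance
        hits += imp[:n_features] > threshold
    return np.where(hits > n_trials / 2)[0]      # keep majority winners

# Synthetic demo: only feature 0 drives the target.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)
selected = boruta_like_select(X, y)
```

The full Boruta algorithm adds a statistical test over many iterations and a "tentative" category; this one-shot majority vote only illustrates the shadow-comparison principle.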
This paper takes a microanalytic perspective on the speech and gestures used by one teacher of ESL (English as a Second Language) in an intensive English program classroom. Videotaped excerpts from her intermediate-level grammar course were transcribed to represent the speech, gesture and other non-verbal behavior that accompanied unplanned explanations of vocabulary arising during three focus-on-form lessons. The gesture classification system of McNeill (1992), which delineates different types of hand movements (iconics, metaphorics, deictics, beats), was used to understand the role the gestures played in these explanations. Results suggest that gestures and other non-verbal behavior are forms of input to classroom second language learners that must be considered a salient factor in classroom-based SLA (Second Language Acquisition) research.
In this paper I examine the following claims by William Eaton in his monograph Boyle on Fire: (i) that Boyle's religious convictions led him to believe that the world was not completely explicable, and this shows that there is a shortcoming in the power of mechanical explanations; (ii) that mechanical explanations offer only sufficient, not necessary, explanations, and this too was taken by Boyle to be a limit on the explanatory power of mechanical explanations; (iii) that the mature Boyle thought that there could be more intelligible explanatory models than mechanism; and (iv) that what Boyle says at any point in his career is incompatible with the statement of Maria Boas-Hall, i.e., that the mechanical hypothesis can explicate all natural phenomena. Since all four of these claims are part of Eaton's developmental argument, my rejection of them will not only show how the particular developmental story Eaton diagnoses is inaccurate, but will also explain what limits there actually are in Boyle's account of the intelligibility of mechanical explanations. My account will also show why important philosophers like Locke and Leibniz should be interested in Boyle's philosophical work.
Color has emerged as a pivotal factor influencing consumer purchasing decisions in the dried herbal medicine market. To address the significant discoloration of rhubarb (Rheum rhabarbarum L.) during drying, this study investigates the effects of microwave fixation (MF) and hot-air fixation (HAF) pretreatment methods on the drying characteristics and quality of rhubarb under ultrasonic synergistic vacuum far-infrared drying (U-VFID), with a primary focus on color attributes. The results indicate that fixation pretreatment significantly enhances both drying efficiency and product quality, particularly color retention. Compared to unpretreated rhubarb, the best overall drying performance was achieved with MF treatment at 60 ℃ for 7 min, which resulted in a 441.18% increase in rhein content, a 58.57% reduction in drying time, and a 48.38% decrease in the ΔE value. Furthermore, correlation analysis and the eXtreme Gradient Boosting (XGBoost) algorithm combined with the SHapley Additive exPlanations (SHAP) model revealed that the color of rhubarb subjected to the various fixation pretreatments in conjunction with U-VFID is primarily influenced by sennoside A content, total phenolic content (TPC), and drying time. This study offers a scientific foundation and theoretical insights for optimizing the quality of dried medicinal plant products and introduces innovative approaches for the post-harvest industrial pretreatment of rhizomatous medicinal plants.
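The ΔE value cited above is a total color difference in CIELAB space; in its simplest (CIE76) form it is the Euclidean distance between two (L*, a*, b*) readings. The study may use a more elaborate formula (e.g. CIEDE2000), so the sketch below, with made-up Lab values, shows only the basic form:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical Lab readings for fresh vs. dried material (not the paper's data).
fresh = (62.0, 14.0, 38.0)
dried = (55.0, 18.0, 30.0)
dE = delta_e76(fresh, dried)
```

A smaller ΔE after pretreatment means the dried product's color stays closer to the fresh reference, which is why the 48.38% decrease reads as better color retention.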
Wind power forecasting (WPF) is important for the safe, stable, and reliable integration of new energy technologies into power systems. Machine learning (ML) algorithms have recently attracted increasing attention in the field of WPF. However, the opaque decisions and lack of trustworthiness of black-box WPF models could create scheduling risks. This study develops a method for identifying risky models in practical applications and avoiding those risks. First, a local interpretable model-agnostic explanations algorithm is introduced and improved for WPF model analysis. On that basis, a novel index is presented to quantify the level at which neural networks or other black-box models can trust the features involved in training. Then, by revealing the operational mechanism on local samples, the human interpretability of the black-box model is examined under different accuracies, time horizons, and seasons. This interpretability provides a basis for several technical routes for WPF from the viewpoint of the forecasting model. Moreover, further improvements in WPF accuracy are explored by evaluating the possibility of using interpretable ML models that employ multi-horizon global trust modeling and multi-season interpretable feature selection methods. Experimental results from a wind farm in China show that error can be robustly reduced.
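The local interpretable model-agnostic explanation idea used above fits a simple surrogate around one sample: perturb the instance, query the black-box model, and fit a proximity-weighted linear model whose coefficients act as local feature weights. A toy numpy sketch (the black-box `f` is a hypothetical stand-in for a WPF model, not the paper's improved algorithm):

```python
import numpy as np

def local_linear_explanation(f, x0, sigma=0.1, n_samples=500, seed=0):
    """LIME-style sketch: fit a proximity-weighted linear surrogate to a
    black-box model f around instance x0; the returned coefficients
    approximate each feature's local contribution."""
    rng = np.random.default_rng(seed)
    X = x0 + sigma * rng.normal(size=(n_samples, len(x0)))
    y = np.array([f(x) for x in X])
    # Proximity kernel: samples near x0 dominate the fit.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    A = np.hstack([X - x0, np.ones((n_samples, 1))])   # centred features + intercept
    # Weighted least squares via the sqrt-weight trick.
    Aw = A * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
    return beta[:-1]                                   # per-feature local weights

# Hypothetical black-box forecaster: nonlinear in feature 0, linear in feature 1.
f = lambda x: x[0] ** 2 + 2.0 * x[1]
weights = local_linear_explanation(f, np.array([1.0, 1.0]))
```

At x0 = (1, 1) the local slopes of `f` are (2, 2), so the surrogate's weights should land near those values even though `f` itself is nonlinear.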
Generative Artificial Intelligence (GAI) refers to a class of AI systems capable of creating novel, coherent, and contextually relevant content, such as text, images, audio, and video, based on patterns learned from extensive training datasets. The public release and rapid refinement of large language models (LLMs) like ChatGPT have accelerated the adoption of GAI across medical specialties, offering new tools for education, clinical simulation, and research. Dermatology training, which relies heavily on visual pattern recognition and requires extensive exposure to diverse morphological presentations, faces persistent challenges such as the uneven distribution of educational resources, limited patient exposure for rare conditions, and variability in teaching quality. Integrating GAI into pedagogical frameworks offers innovative approaches to these challenges, potentially enhancing the quality, standardization, scalability, and accessibility of dermatology education. This comprehensive review examines the core concepts and technical foundations of GAI; highlights its specific applications within dermatology teaching and learning, including simulated case generation, personalized learning pathways, and academic support; and discusses the current limitations, practical challenges, and ethical considerations surrounding its use. The aim is to provide a balanced perspective on the significant potential of GAI for transforming dermatology education and to offer evidence-based insights to guide future exploration, implementation, and policy development.
Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the personal key indicators of the heart disease dataset, a localized random affine shadow sampling technique is employed, which enhances minority-class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
Urban rainwater runoff is an important source of nonpoint source pollution due to its transport of diverse contaminants, including polycyclic aromatic hydrocarbons (PAHs) and their chlorinated derivatives. Importantly, these chlorinated polycyclic aromatic hydrocarbons (Cl-PAHs) exhibit elevated toxicological potential compared to their non-halogenated parent compounds. In this study, we propose an approach that combines a multivariate receptor model with an integrated SHapley Additive exPlanations and Random Forest model. This method identifies possible sources and reveals the impact of source apportionment results and environmental driving factors (such as geographical and meteorological data) on pollutant concentrations. Sixteen PAHs and nine Cl-PAHs were detected in 79 runoff samples from three sites. The average Σ_(16)PAHs concentration (2923.93 to 6071.83 ng/L) was significantly higher than that of Σ_(9)Cl-PAHs (384.34 to 1314.73 ng/L). Source apportionment was conducted by positive matrix factorization (PMF), and six potential pollution sources for PAHs and three for Cl-PAHs were quantified. PAHs primarily originate from the combustion of fossil fuels, such as traffic, industrial emissions and coal tar, while Cl-PAHs are mainly derived from atmospheric deposition and industrial emissions. Meanwhile, a self-organizing map classified PAHs and Cl-PAHs into 2 and 3 groups, respectively, and the k-means algorithm yielded 4 clusters of runoff samples. Among the machine learning models, Random Forest (RF) demonstrated optimal predictive performance, and its integration with SHapley Additive exPlanations (RF-SHAP) revealed the effects of the driving factors on the predicted concentrations of PAHs and Cl-PAHs in urban runoff samples.
This study addresses the challenge of predicting zinc (Zn) recovery from carbonate ores via sodium hydroxide (NaOH) leaching. This complex process is influenced by variable ore composition, surface passivation effects, and nonlinear reaction dynamics, which complicate reagent optimization and process control in hydrometallurgical operations. To tackle this, a dataset containing 422 experimental observations was compiled from previous studies, incorporating ore composition and process parameters such as NaOH concentration, leaching time, temperature, and solid-to-liquid ratio. Four regression models (decision tree, neural network, generalized additive model, and random forest) were trained and evaluated using performance metrics including the coefficient of determination (R^(2)), root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and symmetrical mean absolute percentage error (SMAPE). Among these, the random forest model achieved the best predictive accuracy, with an R^(2) value of 0.8541 on the test set and the lowest error rates, demonstrating its effectiveness in capturing the complex relationships between the input variables and Zn recovery. Explainable artificial intelligence, particularly SHapley Additive exPlanations (SHAP) analysis, revealed that NaOH concentration, leaching time, and solid-to-liquid ratio had the most positive influence on Zn recovery, whereas elements such as Ca, Fe, and Pb had inhibitory effects. These findings align with known geochemical behavior and provide valuable insights for reagent optimization and process efficiency in leaching operations. This study demonstrates the practical potential of machine learning in mineral processing, offering a scalable framework for optimizing Zn recovery from non-sulfide ores and a data-driven approach to enhance decision-making in hydrometallurgical applications.
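The five evaluation metrics listed above have standard definitions that can be written directly in numpy; the `regression_metrics` helper and the recovery values below are illustrative, not the study's data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Standard regression scores of the kind used to compare the four models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2":    1.0 - ss_res / ss_tot,                       # explained variance
        "RMSE":  np.sqrt(np.mean(err ** 2)),
        "MAE":   np.mean(np.abs(err)),
        "MAPE":  100.0 * np.mean(np.abs(err / y_true)),       # % of observed value
        "SMAPE": 100.0 * np.mean(2 * np.abs(err)
                                 / (np.abs(y_true) + np.abs(y_pred))),
    }

# Hypothetical Zn recoveries (%) vs. model predictions, for illustration only.
scores = regression_metrics([80.0, 60.0, 90.0, 70.0], [78.0, 63.0, 88.0, 71.0])
```

SMAPE bounds the percentage error symmetrically in over- and under-prediction, which is why it is often reported alongside MAPE when observed values vary widely.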
To investigate the complex relationship between rolling process parameters and the mechanical properties of AZ31 magnesium alloy rolled sheets, Leave-One-Out Cross-Validation (LOOCV) and parameter tuning were applied to optimize the hyperparameters of four machine learning models (BPNN, SVR, RF, and KNN). An interpretable prediction model based on machine learning and SHapley Additive exPlanations (SHAP), as well as an analytical method combining the SHAP model with the Pearson Correlation Coefficient (PCC), were proposed. The results showed that among the four models, the SVR model was able to simultaneously and accurately predict both the ultimate tensile strength (UTS) and elongation (EL). According to the combined analysis of PCC and the magnesium alloy rolling forming mechanism, strain rate and reduction displayed a negative and positive correlation with UTS, respectively, while rolling temperature and reduction displayed a positive and negative correlation with EL, respectively. Through the SHAP method, which can interpret the output of the SVR machine learning model, it was deduced that reduction and strain rate played the most important roles in the SVR model's outputs of UTS and EL, respectively. Combining SHAP with PCC, it was found that strain rate and reduction had a greater influence on UTS than rolling temperature, whereas strain rate and rolling temperature had more influence on EL than reduction.
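The Pearson Correlation Coefficient used alongside SHAP above is the covariance of two variables normalized by their standard deviations. A minimal pure-Python version, with hypothetical rolling records for illustration (not the study's measurements):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)          # in [-1, 1]

# Hypothetical process records: reduction (%) vs. UTS (MPa), positive trend.
reduction = [10, 20, 30, 40, 50]
uts = [215, 230, 238, 251, 260]
r = pearson_r(reduction, uts)
```

PCC captures only the sign and strength of a linear association, which is why the study pairs it with SHAP: SHAP attributes the model's output, while PCC checks whether the raw variable trends agree with the forming mechanism.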
The inverse design of advanced materials represents a pivotal challenge in materials science. Leveraging the latent space of Variational Autoencoders (VAEs) for material optimization has emerged as a significant advancement in material inverse design. However, VAEs are inherently prone to generating blurred images, posing challenges for precise inverse design and microstructure manufacturing. While increasing the dimensionality of the VAE latent space can mitigate reconstruction blurriness to some extent, it simultaneously imposes a substantial burden on target optimization due to an excessively large search space. To address these limitations, this study adopts a Variational Autoencoder guided Conditional Diffusion Generative Model (VAE-CDGM) framework integrated with Bayesian optimization to achieve the inverse design of composite materials with targeted mechanical properties. The VAE-CDGM model synergizes the strengths of VAEs and Denoising Diffusion Probabilistic Models (DDPMs), enabling the generation of high-quality, sharp images while preserving a manipulable latent space. To accommodate varying dimensional requirements of the latent space, two optimization strategies are proposed. When the latent space dimensionality is excessively high, SHapley Additive exPlanations (SHAP) sensitivity analysis is employed to identify the critical latent features and optimize within a reduced subspace; conversely, direct optimization is performed in the low-dimensional latent space of the VAE-CDGM when the dimensionality is modest. The results demonstrate that both strategies accurately achieve the targeted design of composite materials while circumventing the blurred-reconstruction flaws of VAEs, offering a novel pathway for the precise design of advanced materials.
In the arid regions of Northwest China, vegetation cover plays a crucial role in maintaining unique terrestrial ecosystems. Vegetation growth is highly sensitive to variations in topographical factors, and the influence of topography on vegetation cover has attracted increasing attention. This study analyzed vegetation dynamics and their relationship with topography in the Tianshan Mountains of China using Landsat Normalized Difference Vegetation Index (NDVI) data during 2000–2022 and Shuttle Radar Topography Mission (SRTM)-derived topographical factors (elevation, slope, and aspect). Theil-Sen slope estimation and Mann-Kendall trend tests were applied to quantify temporal changes in vegetation, while a terrain area correction coefficient (K) was used to assess the spatial associations of vegetation with topography. Random Forest (RF) regression and SHapley Additive exPlanations (SHAP) analysis evaluated the relative importance of the topographical factors in shaping the distribution of vegetation cover (multi-year mean NDVI). Key findings include that over the 23-year period, 59.46% of the vegetated area exhibited significant improvement (P<0.05), with the southern Tianshan Mountains showing the most pronounced increase (70.59%), whereas vegetation degradation (3.10%) was primarily concentrated in river valleys with intensive human activities. RF-SHAP analysis revealed that elevation is the primary driver of vegetation cover patterns, explaining 52.00% of the NDVI variation; the peak NDVI (0.42) occurred at elevations between 2800 and 3200 m. Slope and aspect also significantly influenced vegetation distribution, with higher NDVI values and greater improvement trends observed on shady (north-facing) slopes than on sunny (south-facing) slopes. K-index analysis indicated pronounced vegetation change, both degradation and improvement, in areas with elevations between 1100 and 2800 m and slopes exceeding 5°, particularly on sunny slopes. Low-elevation desert areas in the southern Tianshan Mountains were highly susceptible to degradation. This study underscores the critical role of topography in regulating vegetation cover and its spatiotemporal dynamics, providing a scientific basis for the sustainable management of arid mountain ecosystems.
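Theil-Sen slope estimation and the Mann-Kendall test used above both reduce to statistics over all ordered pairs of a time series: the median of pairwise slopes, and the signed count of increasing versus decreasing pairs. A compact sketch (the per-pixel NDVI series is invented for illustration):

```python
import itertools

def theil_sen_slope(y):
    """Median of pairwise slopes over a yearly series (x = 0, 1, 2, ...)."""
    slopes = sorted((y[j] - y[i]) / (j - i)
                    for i, j in itertools.combinations(range(len(y)), 2))
    m = len(slopes)
    return slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])

def mann_kendall_s(y):
    """Mann-Kendall S statistic: positive S indicates an increasing trend.
    (A full test would also compute the variance of S and a Z score.)"""
    sign = lambda d: (d > 0) - (d < 0)
    return sum(sign(y[j] - y[i])
               for i, j in itertools.combinations(range(len(y)), 2))

# Hypothetical NDVI series for one pixel (illustration only).
ndvi = [0.30, 0.32, 0.31, 0.35, 0.36, 0.38]
slope, s = theil_sen_slope(ndvi), mann_kendall_s(ndvi)
```

The median of pairwise slopes makes Theil-Sen robust to outlier years, which is why it is preferred over ordinary least squares for noisy satellite series.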
Within the context of global climate change and rapid urbanization, increasing urban resilience (UR) is especially important in the arid region of Northwest China (ANC), where fragile ecosystems and an uneven water distribution create significant sustainability challenges. In this study, a coupled UR-water ecosystem services (WESs) framework was developed on the basis of 1-km resolution remote sensing data for the 2000–2020 period obtained from the Landsat series, the Defense Meteorological Satellite Program (DMSP)/Operational Linescan System (OLS), and the Global Precipitation Measurement (GPM) mission, among other sources. Within the framework, the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) model was incorporated to provide a WES indicator system. Entropy weighting was employed to quantify both the UR and WES indicators; the coupling coordination degree (CCD) model was used to measure the coupled relationship between UR and WESs; an extreme gradient boosting (XGBoost)-SHapley Additive exPlanations (SHAP) interpretation approach was adopted to identify key drivers and underlying mechanisms; and Geographically Weighted Regression (GWR) was applied to capture the spatial distribution characteristics of the major driving factors. The results indicated that UR steadily increased from 4.60×10^(-3) to 10.24×10^(-3), whereas WESs followed an inverted V-shaped trend, peaking in 2010 (11.84×10^(-3)). The CCD remained consistently low (mean: 0.0166–0.0246) and exhibited considerable spatial heterogeneity: the degree of coordination was greater in the oasis and mountain core areas but significantly lower in desert areas. XGBoost-SHAP analysis revealed six key drivers influencing the various states of the CCD between the UR and WES systems. Their contributions were ranked as follows: water yield (WY; 24.30%) > farmland area per capita (FP; 21.10%) > gross domestic product (GDP) per capita (GDPC; 19.00%) > soil retention (SR; 14.90%) > population density (PD; 5.42%) > water purification (WP; 4.40%). In contrast, within the UR system, WP (53.66%) and SR (31.62%) served as the dominant drivers. Moreover, the dominant drivers shifted from a combination of natural and socioeconomic factors in State Ⅰ (sustainable high resilience) to primarily socioeconomic factors in State Ⅲ (unsustainable low resilience). SR and WP exerted positive moderating effects, whereas socioeconomic factors such as GDPC and PD inhibited the coordination relationship. This research highlights that UR in the ANC region is limited mainly by water scarcity, weak feedback loops, and spatial variability, emphasizing the need for tailored intervention strategies.
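The coupling coordination degree (CCD) model referenced above commonly takes a two-subsystem form: a coupling degree C = 2·sqrt(U1·U2)/(U1+U2), a comprehensive index T = αU1 + βU2, and D = sqrt(C·T). The study's exact weighting may differ, so the sketch and the index values below are purely illustrative:

```python
import math

def coupling_coordination(u1, u2, alpha=0.5, beta=0.5):
    """Coupling coordination degree (CCD) between two normalized subsystem
    indices u1 and u2 (common two-subsystem form; weights alpha + beta = 1)."""
    c = 2.0 * math.sqrt(u1 * u2) / (u1 + u2)   # coupling degree, in (0, 1]
    t = alpha * u1 + beta * u2                 # comprehensive evaluation index
    return math.sqrt(c * t)                    # coordination degree D

# Hypothetical normalized UR and WES indices for one grid cell.
d = coupling_coordination(0.0102, 0.0118)
```

Because D scales with sqrt(T), two subsystems that are well matched (C near 1) but both small, as with the 10^(-3)-order indices reported here, still yield a low D, which is consistent with the low mean CCD the study observes.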
Arid mountain ecosystems are highly sensitive to hydrothermal stress and land use intensification, yet where net primary productivity (NPP) degradation is likely to persist, and what drives it, remain unclear in the Tianshan Mountains of Northwest China. We integrated multi-source remote sensing with the Carnegie–Ames–Stanford Approach (CASA) model to estimate NPP during 2000–2020, assessed trend persistence using the Hurst exponent, and identified key drivers and nonlinear thresholds with Extreme Gradient Boosting (XGBoost) and SHapley Additive exPlanations (SHAP). Total NPP averaged 55.74 Tg C/a and ranged from 48.07 to 65.91 Tg C/a over 2000–2020, while regional mean NPP rose from 138.97 to 160.69 g C/(m^(2)·a). Land use transfer analysis showed that grassland expanded mainly at the expense of unutilized land and that cropland increased overall. Although NPP increased across 64.11% of the region during 2000–2020, persistence analysis suggested that 53.93% of the Tianshan Mountains is prone to continued NPP decline, including 36.41% with significant projected decline and 17.52% with weak projected decline; these areas form degradation hotspots concentrated in the central and northern Tianshan Mountains. In contrast, potential improvement was limited (strong persistent improvement: 4.97%; strong anti-persistent improvement: 0.36%). Driver attribution indicated that land use dominated NPP variability (mean absolute SHAP value = 29.54%), followed by precipitation (16.03%) and temperature (11.05%). SHAP dependence analyses showed that precipitation effects stabilized at 300.00–400.00 mm, and that temperature exhibited an inverted U-shaped response with a peak near 0.00 °C. These findings indicate that persistent degradation risk arises from hydrothermal constraints interacting with land use conversion, highlighting the need for threshold-informed, spatially targeted management to sustain carbon sequestration in arid mountain ecosystems.
Funding: supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (RS-2023-00222536).
Abstract: This study provides an in-depth comparative evaluation of landslide susceptibility using two distinct spatial units, slope units (SUs) and hydrological response units (HRUs), within Goesan County, South Korea. Leveraging the capabilities of the extreme gradient boosting (XGB) algorithm combined with Shapley Additive Explanations (SHAP), this work assesses the precision and clarity with which each unit predicts areas vulnerable to landslides. SUs are delineated from geomorphological features such as ridges and valleys, emphasizing slope stability and landslide triggers. Conversely, HRUs are established from a variety of hydrological factors, including land cover, soil type and slope gradient, to encapsulate the dynamic water processes of the region. The methodological framework includes the systematic gathering, preparation and analysis of data, ranging from historical landslide occurrences to topographical and environmental variables such as elevation, slope angle and land curvature. The XGB algorithm used to construct the Landslide Susceptibility Model (LSM) was combined with SHAP for model interpretation, and the results were evaluated using random cross-validation (RCV) to ensure accuracy and reliability. To ensure optimal model performance, the XGB algorithm's hyperparameters were tuned using Differential Evolution, considering only multicollinearity-free variables. The results show that both SUs and HRUs are effective for LSM, but their effectiveness varies with landscape characteristics. The XGB algorithm demonstrates strong predictive power, and SHAP enhances model transparency by revealing the influential variables involved. This work underscores the importance of selecting assessment units tailored to specific landscape characteristics for accurate LSM. The integration of advanced machine learning techniques with interpretative tools offers a robust framework for landslide susceptibility assessment, improving both predictive capability and model interpretability. Future research should integrate broader data sets and explore hybrid analytical models to strengthen the generalizability of these findings across varied geographical settings.
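As a rough illustration of the tuning step named in this abstract, Differential Evolution can be sketched in pure Python. The objective below is a hypothetical stand-in for the model's cross-validation loss, not the paper's actual setup:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=42):
    """Minimal DE/rand/1/bin optimizer over box-constrained parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize the population uniformly within the bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation and binomial crossover, clipped to the bounds.
            trial = []
            j_rand = rng.randrange(dim)
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            s = objective(trial)
            if s < scores[i]:  # Greedy selection.
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda k: scores[k])
    return pop[best], scores[best]

# Hypothetical stand-in objective: a sphere function in place of CV loss.
best_x, best_f = differential_evolution(lambda x: sum(v * v for v in x),
                                        bounds=[(-5.0, 5.0)] * 3)
```

In practice the objective would wrap an XGB training-and-validation run and the bounds would cover hyperparameters such as learning rate and tree depth.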
Funding: supported by the National Natural Science Foundation of China Project (No. 62302540); please visit their website at https://www.nsfc.gov.cn/ (accessed on 18 June 2024).
Abstract: The methods of network attacks have become increasingly sophisticated, rendering traditional cybersecurity defense mechanisms insufficient to address novel and complex threats effectively. In recent years, artificial intelligence has achieved significant progress in the field of network security. However, many challenges and issues remain, particularly regarding the interpretability of deep learning and ensemble learning algorithms. To address the challenge of enhancing the interpretability of network attack prediction models, this paper proposes a method that combines Light Gradient Boosting Machine (LGBM) and SHapley Additive exPlanations (SHAP). LGBM is employed to model anomalous fluctuations in various network indicators, enabling the rapid and accurate identification and prediction of potential network attack types and thereby facilitating the implementation of timely defense measures. The model achieved an accuracy of 0.977, precision of 0.985, recall of 0.975, and an F1 score of 0.979, demonstrating better performance than other models in the domain of network attack prediction. SHAP is utilized to analyze the black-box decision-making process of the model, providing interpretability by quantifying the contribution of each feature to the prediction results and elucidating the relationships between features. The experimental results demonstrate that the network attack prediction model based on LGBM exhibits superior accuracy and outstanding predictive capabilities. Moreover, the SHAP-based interpretability analysis significantly improves the model's transparency and interpretability.
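The headline metrics quoted above are related by their standard definitions; a small pure-Python check (with hypothetical confusion-matrix counts, not the paper's data) shows how precision, recall, and F1 combine:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts chosen to mirror the reported precision/recall;
# the resulting F1 lands near the paper's rounded 0.979.
p, r, f1 = precision_recall_f1(tp=975, fp=15, fn=25)
```

Note that F1 is the harmonic mean of precision and recall, so it always lies between the two.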
Abstract: Predicting molecular properties is essential for advancing drug discovery and design. Recently, Graph Neural Networks (GNNs) have gained prominence due to their ability to capture the complex structural and relational information inherent in molecular graphs. Despite their effectiveness, the "black-box" nature of GNNs remains a significant obstacle to their widespread adoption in chemistry, as it hinders interpretability and trust. In this context, several explanation methods based on factual reasoning have emerged. These methods aim to interpret the predictions made by GNNs by analyzing the key features contributing to the prediction. However, these approaches fail to answer a critical question: how to ensure that the structure-property mapping learned by GNNs is consistent with established domain knowledge. In this paper, we propose MMGCF, a novel counterfactual explanation framework designed specifically for the prediction of GNN-based molecular properties. MMGCF constructs a hierarchical tree structure on molecular motifs, enabling the systematic generation of counterfactuals through motif perturbations. This framework identifies causally significant motifs and elucidates their impact on model predictions, offering insights into the relationship between structural modifications and predicted properties. Our method demonstrates its effectiveness through comprehensive quantitative and qualitative evaluations of four real-world molecular datasets.
Abstract: Deep learning models have become a core technological tool in the field of medical image analysis. However, these models often suffer from a lack of transparency in their decision-making processes, leading to challenges related to trust and interpretability in clinical applications. To address this issue, explainable artificial intelligence (XAI) techniques have been applied to medical image analysis. While showing promising potential, XAI also brings significant ethical risks in practice, most notably the problem of spurious explanations. Such explanations may raise further concerns regarding patient privacy, data security, and the attribution of decision-making authority in medical contexts. This paper analyzes the application of XAI methods, particularly saliency maps, in medical image interpretation, identifies the underlying causes of spurious explanations, and proposes possible mitigation strategies. The aim is to contribute to the responsible and sustainable integration of explainable AI into clinical practice.
Funding: support provided by The Science and Technology Development Fund, Macao SAR, China (File Nos. 0057/2020/AGJ and SKL-IOTSC-2021-2023) and the Science and Technology Program of Guangdong Province, China (Grant No. 2021A0505080009).
Abstract: Accurate prediction of shield tunneling-induced settlement is a complex problem that requires consideration of many influential parameters. Recent studies reveal that machine learning (ML) algorithms can predict the settlement caused by tunneling. However, well-performing ML models are usually less interpretable. Irrelevant input features decrease the performance and interpretability of an ML model. Nonetheless, feature selection, a critical step in the ML pipeline, is usually ignored in studies focused on predicting tunneling-induced settlement. This study applies four techniques, i.e. the Pearson correlation method, sequential forward selection (SFS), sequential backward selection (SBS) and the Boruta algorithm, to investigate the effect of feature selection on model performance when predicting the tunneling-induced maximum surface settlement (S_(max)). The data set used in this study was compiled from two metro tunnel projects excavated in Hangzhou, China using earth pressure balance (EPB) shields and consists of 14 input features and a single output (i.e. S_(max)). The ML model trained on features selected by the Boruta algorithm demonstrates the best performance in both the training and testing phases. The relevant features chosen by the Boruta algorithm further indicate that tunneling-induced settlement is affected by parameters related to tunnel geometry, geological conditions and shield operation. The recently proposed Shapley additive explanations (SHAP) method explores how the input features contribute to the output of a complex ML model. It is observed that larger settlements are induced during shield tunneling in silty clay. Moreover, the SHAP analysis reveals that low magnitudes of face pressure at the top of the shield increase the model's output.
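Of the four selectors listed, the Pearson correlation filter is the simplest; a minimal pure-Python sketch (hypothetical threshold and toy data, not the Hangzhou data set) looks like this:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_filter(features, target, threshold=0.3):
    """Keep feature names whose |r| with the target exceeds the threshold."""
    return [name for name, col in features.items()
            if abs(pearson_r(col, target)) > threshold]

# Toy example: f1 tracks the target, f2 is uncorrelated noise.
target = [1.0, 2.0, 3.0, 4.0, 5.0]
features = {"f1": [2.1, 3.9, 6.2, 8.0, 10.1],
            "f2": [0.1, -0.1, 0.1, -0.1, 0.1]}
selected = correlation_filter(features, target)
```

A limitation worth noting, and a reason the abstract's comparison matters: a plain correlation filter only captures linear, one-at-a-time relationships, whereas wrapper methods such as SFS/SBS and Boruta evaluate features through a model.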
Abstract: This paper takes a microanalytic perspective on the speech and gestures used by one teacher of ESL (English as a Second Language) in an intensive English program classroom. Videotaped excerpts from her intermediate-level grammar course were transcribed to represent the speech, gesture and other non-verbal behavior that accompanied unplanned explanations of vocabulary that arose during three focus-on-form lessons. The gesture classification system of McNeill (1992), which delineates different types of hand movements (iconics, metaphorics, deictics, beats), was used to understand the role the gestures played in these explanations. Results suggest that gestures and other non-verbal behavior are forms of input to classroom second language learners that must be considered a salient factor in classroom-based SLA (Second Language Acquisition) research.
Abstract: In this paper I examine the following claims by William Eaton in his monograph Boyle on Fire: (i) that Boyle's religious convictions led him to believe that the world was not completely explicable, and this shows that there is a shortcoming in the power of mechanical explanations; (ii) that mechanical explanations offer only sufficient, not necessary explanations, and this too was taken by Boyle to be a limit in the explanatory power of mechanical explanations; (iii) that the mature Boyle thought that there could be more intelligible explanatory models than mechanism; and (iv) that what Boyle says at any point in his career is incompatible with the statement of Maria Boas-Hall, i.e., that the mechanical hypothesis can explicate all natural phenomena. Since all four of these claims are part of Eaton's developmental argument, my rejection of them will not only show how the particular developmental story Eaton diagnoses is inaccurate, but will also explain what limits there actually are in Boyle's account of the intelligibility of mechanical explanations. My account will also show why important philosophers like Locke and Leibniz should be interested in Boyle's philosophical work.
Funding: supported by the Young Mentor Fund project of Gansu Agricultural University [grant number 0522014] and the Gansu Provincial Science and Technology Plan [grant number 23CXNA0017].
Abstract: Color has emerged as a pivotal factor influencing consumer purchasing decisions in the dried herbal medicine market. To address the issue of significant discoloration of Rhubarb (Rheum rhabarbarum L.) during the drying process, this study investigates the effects of microwave fixation (MF) and hot-air fixation (HAF) pretreatment methods on the drying characteristics and quality of Rhubarb under ultrasonic synergistic vacuum far-infrared drying (U-VFID), with a primary focus on its color attributes. The results indicate that fixation pretreatment significantly enhances both drying efficiency and product quality, particularly in terms of color retention. Compared to unpretreated Rhubarb, the best overall drying effect was achieved with MF treatment at 60 °C for 7 min, which resulted in a 441.18% increase in rhein content, a 58.57% reduction in drying time, and a 48.38% decrease in the ΔE value. Furthermore, correlation analysis and the eXtreme Gradient Boosting (XGBoost) algorithm combined with the SHapley Additive exPlanations (SHAP) model revealed that the color of Rhubarb subjected to various fixation pretreatments in conjunction with U-VFID is primarily influenced by sennoside A content, total phenolic content (TPC), and drying time. This study offers a scientific foundation and theoretical insights for optimizing the quality of dried medicinal plant products and introduces innovative approaches for post-harvest industrial pretreatment of rhizomatous medicinal plants.
Funding: supported by the National Key R&D Program of China (Technology and application of wind power/photovoltaic power prediction for promoting renewable energy consumption) under Grant (2018YFB0904200).
Abstract: Wind power forecasting (WPF) is important for safe, stable, and reliable integration of new energy technologies into power systems. Machine learning (ML) algorithms have recently attracted increasing attention in the field of WPF. However, opaque decisions and lack of trustworthiness of black-box models for WPF could cause scheduling risks. This study develops a method for identifying risky models in practical applications and avoiding the risks. First, a local interpretable model-agnostic explanations algorithm is introduced and improved for WPF model analysis. On that basis, a novel index is presented to quantify the level at which neural networks or other black-box models can trust features involved in training. Then, by revealing the operational mechanism for local samples, human interpretability of the black-box model is examined under different accuracies, time horizons, and seasons. This interpretability provides a basis for several technical routes for WPF from the viewpoint of the forecasting model. Moreover, further improvements in the accuracy of WPF are explored by evaluating the possibility of using interpretable ML models with multi-horizon global trust modeling and multi-season interpretable feature selection methods. Experimental results from a wind farm in China show that error can be robustly reduced.
Abstract: Generative Artificial Intelligence (GAI) refers to a class of AI systems capable of creating novel, coherent, and contextually relevant content—such as text, images, audio, and video—based on patterns learned from extensive training datasets. The public release and rapid refinement of large language models (LLMs) like ChatGPT have accelerated the adoption of GAI across various medical specialties, offering new tools for education, clinical simulation, and research. Dermatology training, which heavily relies on visual pattern recognition and requires extensive exposure to diverse morphological presentations, faces persistent challenges such as uneven distribution of educational resources, limited patient exposure for rare conditions, and variability in teaching quality. Exploring the integration of GAI into pedagogical frameworks offers innovative approaches to address these challenges, potentially enhancing the quality, standardization, scalability, and accessibility of dermatology education. This comprehensive review examines the core concepts and technical foundations of GAI, highlights its specific applications within dermatology teaching and learning—including simulated case generation, personalized learning pathways, and academic support—and discusses the current limitations, practical challenges, and ethical considerations surrounding its use. The aim is to provide a balanced perspective on the significant potential of GAI for transforming dermatology education and to offer evidence-based insights to guide future exploration, implementation, and policy development.
Funding: funded by the Ongoing Research Funding Program for Project number (ORF-2025-648), King Saud University, Riyadh, Saudi Arabia.
Abstract: Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the personal key indicators of the heart disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Furthermore, robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Moreover, model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
Funding: supported by the Guangdong Basic and Applied Basic Research Foundation (Nos. 2021B1515120055 and 2022A1515010499).
Abstract: Urban rainwater runoff is an important source of nonpoint source pollution due to its transport of diverse contaminants, including polycyclic aromatic hydrocarbons (PAHs) and their chlorinated derivatives. Importantly, these chlorinated polycyclic aromatic hydrocarbons (Cl-PAHs) exhibit elevated toxicological potential compared to their non-halogenated parent compounds. In this study, we proposed an approach that combines a multivariate receptor model with an integration of SHapley Additive exPlanations and a Random Forest model. This method identifies possible sources and reveals the impact of source apportionment results and environmental driving factors (such as geographical and meteorological data) on pollutant concentrations. Sixteen PAHs and nine Cl-PAHs were detected in 79 runoff samples from all three sites. The average Σ_(16)PAHs concentration (2923.93 to 6071.83 ng/L) was significantly higher than that of Σ_(9)Cl-PAHs (384.34 to 1314.73 ng/L). Source apportionment was conducted by positive matrix factorization (PMF), and six potential pollution sources for PAHs and three for Cl-PAHs were quantified. PAHs primarily originate from the combustion of fossil fuels, such as traffic, industrial emissions and coal tar, while Cl-PAHs are mainly derived from atmospheric deposition and industrial emissions. Meanwhile, the self-organizing map classified PAHs and Cl-PAHs into 2 and 3 groups, respectively, and the k-means algorithm yielded 4 clusters for the runoff samples. Among the machine learning models, Random Forest (RF) demonstrated optimal predictive performance and, integrated with SHapley Additive exPlanations (RF-SHAP), revealed the effects of driving factors on the predicted concentrations of PAHs and Cl-PAHs in urban runoff samples.
Abstract: This study addresses the challenge of predicting zinc (Zn) recovery from carbonate ores via sodium hydroxide (NaOH) leaching. This complex process is influenced by variable ore composition, surface passivation effects, and nonlinear reaction dynamics, which complicate reagent optimization and process control in hydrometallurgical operations. To tackle this, a dataset containing 422 experimental observations was compiled from previous studies, incorporating ore composition and process parameters such as NaOH concentration, leaching time, temperature, and solid-to-liquid ratio. Four regression models (decision tree, neural network, generalized additive model, and random forest) were trained and evaluated using performance metrics such as the coefficient of determination (R^(2)), root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and symmetrical mean absolute percentage error (SMAPE). Among these, the random forest model achieved the best predictive accuracy, with an R^(2) value of 0.8541 on the test set and the lowest error rates, demonstrating its effectiveness in capturing the complex relationships between input variables and Zn recovery. Explainable artificial intelligence, particularly SHapley Additive exPlanations (SHAP) analysis, revealed that NaOH concentration, leaching time, and solid-to-liquid ratio had the most positive influence on Zn recovery, whereas elements such as Ca, Fe, and Pb had inhibitory effects. These findings align with known geochemical behavior and provide valuable insights for reagent optimization and process efficiency in leaching processes. This study demonstrates the practical potential of machine learning in mineral processing, offering a scalable framework for optimizing Zn recovery from non-sulfide ores and a data-driven approach to enhance decision-making in hydrometallurgical applications.
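The error metrics listed above have standard definitions; a small pure-Python sketch (toy recovery values, not the leaching data) shows how they are computed from paired observations and predictions:

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE and SMAPE for paired observations/predictions."""
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    # Percentage errors; MAPE assumes no zero targets.
    mape = 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    smape = 100 * sum(2 * abs(t - p) / (abs(t) + abs(p))
                      for t, p in zip(y_true, y_pred)) / n
    return rmse, mae, mape, smape

# Hypothetical Zn-recovery percentages, for illustration only.
y_true = [80.0, 60.0, 90.0, 70.0]
y_pred = [78.0, 63.0, 88.0, 71.0]
rmse, mae, mape, smape = regression_metrics(y_true, y_pred)
```

RMSE penalizes large deviations more heavily than MAE, while SMAPE bounds the percentage error symmetrically, which is why studies often report several of these together.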
Funding: supported by the National Natural Science Foundation of China (Nos. 52471132, 52475356, 52071139, U21A20130), the National Social Science Fund of China (No. 21BJL075), the Natural Science Foundation of Fujian Province for Distinguished Young Scholars, China (No. 2024J010031), and the Natural Science Foundation of Chongqing, China (No. CSTB2023NSCQ-MSX0886).
Abstract: To investigate the complex relationship between rolling process parameters and the mechanical properties of AZ31 magnesium alloy rolled sheets, Leave-One-Out Cross-Validation (LOOCV) and parameter tuning were applied to optimize the hyper-parameters of four machine learning models (BPNN, SVR, RF, and KNN). An interpretable prediction model based on machine learning and SHapley Additive exPlanations (SHAP), as well as an analytical method combining the SHAP model and the Pearson Correlation Coefficient (PCC), were proposed. The results showed that among the four models, the SVR model was able to simultaneously and accurately predict the ultimate tensile strength (UTS) and elongation (EL). According to the combined analysis of PCC and the magnesium alloy rolling forming mechanism, it was found that strain rate and reduction displayed negative and positive correlations with UTS, respectively, while rolling temperature and reduction displayed positive and negative correlations with EL, respectively. Through the SHAP method, which can interpret the output of the SVR machine learning model, it was deduced that reduction and strain rate played important roles in the SVR model outputs of UTS and EL, respectively. Combining SHAP with PCC, it was found that strain rate and reduction had a greater influence on UTS than rolling temperature, whereas strain rate and rolling temperature had more influence on EL than reduction.
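LOOCV, the validation scheme named above, is easy to sketch generically in pure Python; the toy nearest-neighbour predictor below is a placeholder, not one of the paper's four models:

```python
def loocv_error(xs, ys, fit_predict):
    """Leave-one-out cross-validation: hold out each sample once,
    fit on the rest, and average the squared prediction error."""
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        pred = fit_predict(train_x, train_y, xs[i])
        errors.append((pred - ys[i]) ** 2)
    return sum(errors) / len(errors)

def nearest_neighbour(train_x, train_y, query):
    """Toy 1-NN regressor used as a stand-in model."""
    j = min(range(len(train_x)), key=lambda k: abs(train_x[k] - query))
    return train_y[j]

# On a noiseless linear relation, 1-NN's LOOCV error is just the squared
# gap to the nearest remaining neighbour's target value.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 10.0]
mse = loocv_error(xs, ys, nearest_neighbour)
```

LOOCV fits the model n times, which is affordable for the small experimental datasets typical of alloy-processing studies and avoids wasting scarce samples on a fixed validation split.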
Abstract: Inverse design of advanced materials represents a pivotal challenge in materials science. Leveraging the latent space of Variational Autoencoders (VAEs) for material optimization has emerged as a significant advancement in the field of material inverse design. However, VAEs are inherently prone to generating blurred images, posing challenges for precise inverse design and microstructure manufacturing. While increasing the dimensionality of the VAE latent space can mitigate reconstruction blurriness to some extent, it simultaneously imposes a substantial burden on target optimization due to an excessively large search space. To address these limitations, this study adopts a Variational Autoencoder guided Conditional Diffusion Generative Model (VAE-CDGM) framework integrated with Bayesian optimization to achieve the inverse design of composite materials with targeted mechanical properties. The VAE-CDGM model synergizes the strengths of VAEs and Denoising Diffusion Probabilistic Models (DDPM), enabling the generation of high-quality, sharp images while preserving a manipulable latent space. To accommodate varying dimensional requirements of the latent space, two optimization strategies are proposed. When the latent space dimensionality is excessively high, SHapley Additive exPlanations (SHAP) sensitivity analysis is employed to identify critical latent features for optimization within a reduced subspace. Conversely, direct optimization is performed in the low-dimensional latent space of VAE-CDGM when dimensionality is modest. The results demonstrate that both strategies accurately achieve the targeted design of composite materials while circumventing the blurred reconstruction flaws of VAEs, offering a novel pathway for the precise design of advanced materials.
Funding: supported by the National Key R&D Program of China (2023YFE0207900).
Abstract: In the arid regions of Northwest China, vegetation cover plays a crucial role in maintaining unique terrestrial ecosystems. Vegetation growth is highly sensitive to variations in topographical factors, and the influence of topography on vegetation cover has attracted increasing attention. This study analyzed vegetation dynamics and their relationship with topography in the Tianshan Mountains of China using Landsat Normalized Difference Vegetation Index (NDVI) data during 2000–2022 and Shuttle Radar Topography Mission (SRTM)-derived topographical factors (elevation, slope, and aspect). Theil-Sen slope estimation and Mann-Kendall trend tests were applied to quantify temporal changes in vegetation, while a terrain area correction coefficient (K) was used to assess spatial associations of vegetation with topography. Random Forest (RF) regression and SHapley Additive exPlanations (SHAP) analysis evaluated the relative importance of topographical factors in shaping the distribution of vegetation cover (multi-year mean NDVI). Key findings included that over the 23-a period, 59.46% of the vegetated area exhibited significant improvement (P<0.05), with the southern Tianshan Mountains showing the most pronounced increase (70.59%), whereas vegetation degradation (3.10%) was primarily concentrated in river valleys with intensive human activities. RF-SHAP analysis revealed that elevation is the primary driver of vegetation cover patterns, explaining 52.00% of the NDVI variation. The peak NDVI (0.42) occurred at elevations between 2800 and 3200 m. Slope and aspect also significantly influenced vegetation distribution, and higher NDVI values and greater improvement trends were observed on shady (north-facing) slopes compared to sunny (south-facing) slopes. K-index analysis indicated pronounced vegetation change—both degradation and improvement—in areas with elevations between 1100 and 2800 m and slopes exceeding 5°, particularly on sunny slopes. Low-elevation desert areas in the southern Tianshan Mountains were highly susceptible to degradation. This study underscores the critical role of topography in regulating vegetation cover and its spatiotemporal dynamics, providing a scientific basis for sustainable management of arid mountain ecosystems.
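The two trend statistics named above are straightforward to sketch in pure Python; this simplified version omits tie corrections and significance look-ups, and the NDVI-like series is invented for illustration:

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(series):
    """Median of all pairwise slopes — a robust trend estimate."""
    slopes = [(yj - yi) / (j - i)
              for (i, yi), (j, yj) in combinations(enumerate(series), 2)]
    return median(slopes)

def mann_kendall_s(series):
    """Mann-Kendall S statistic: positive for an increasing trend."""
    sign = lambda d: (d > 0) - (d < 0)
    return sum(sign(yj - yi)
               for (i, yi), (j, yj) in combinations(enumerate(series), 2))

# Toy annual NDVI-like series with an upward trend plus small wiggles.
ndvi = [0.30, 0.33, 0.31, 0.36, 0.38, 0.37, 0.41]
slope = theil_sen_slope(ndvi)
s_stat = mann_kendall_s(ndvi)
```

Because the slope is a median of pairwise slopes and S depends only on signs, both statistics tolerate outliers and non-normal noise, which is why they are standard for pixel-wise NDVI trend mapping.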
Funding: supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (2022D01B109) and the Tianchi Doctoral Program of Xinjiang Uygur Autonomous Region (BS2021007).
Abstract: Within the context of global climate change and rapid urbanization, increasing urban resilience (UR) is especially important in the arid region of Northwest China (ANC), where fragile ecosystems and an uneven water distribution create significant sustainability challenges. In this study, a coupled UR-water ecosystem services (WESs) framework was developed on the basis of 1-km resolution remote sensing data for the 2000–2020 period obtained from the Landsat series, Defense Meteorological Satellite Program (DMSP)/Operational Linescan System (OLS), and Global Precipitation Measurement (GPM), among other sources. Within the framework, the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) model was incorporated to provide a WES indicator system. Moreover, entropy weighting was employed to quantify both UR and WES indicators; the coupling coordination degree (CCD) model was used to measure the coupled relationship between UR and WESs; an extreme gradient boosting (XGBoost)-SHapley Additive exPlanations (SHAP) interpretation approach was adopted to identify key drivers and underlying mechanisms; and Geographically Weighted Regression (GWR) was applied to capture the spatial distribution characteristics of the major driving factors. The results indicated that UR steadily increased from 4.60×10^(-3) to 10.24×10^(-3), whereas WESs followed an inverted V-shaped trend, with a peak value observed in 2010 (11.84×10^(-3)). The CCD remained consistently low (mean: 0.0166–0.0246) and exhibited considerable spatial heterogeneity. Notably, the degree of coordination was greater in the oasis and mountain core areas but significantly lower in desert areas. XGBoost-SHAP model analysis revealed six key drivers influencing various states of the CCD between the UR and WES systems. The contributions of these factors were ranked as follows: water yield (WY; 24.30%) > farmland area per capita (FP; 21.10%) > gross domestic product (GDP) per capita (GDPC; 19.00%) > soil retention (SR; 14.90%) > population density (PD; 5.42%) > water purification (WP; 4.40%). In contrast, in the UR system, WP (53.66%) and SR (31.62%) served as the dominant drivers. Moreover, the dominant drivers shifted from a combination of natural and socioeconomic factors in State I (sustainable high resilience) to primarily socioeconomic factors in State III (unsustainable low resilience). SR and WP exerted positive moderating effects, whereas socioeconomic factors such as GDPC and PD exerted inhibitory effects on the coordination relationship. This research highlights that UR in the ANC region is limited mainly by water scarcity, weak feedback loops, and spatial variability, emphasizing the need for tailored intervention strategies.
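The coupling coordination degree has several variants in the literature; one common two-system formulation (an assumption here, not necessarily this paper's exact equations) is C = 2·sqrt(U1·U2)/(U1+U2), T = a·U1 + b·U2, D = sqrt(C·T):

```python
import math

def coupling_coordination(u1, u2, a=0.5, b=0.5):
    """One common two-system CCD formulation (assumed, not necessarily
    the paper's exact equations): coupling C, composite index T, degree D."""
    if u1 + u2 == 0:
        return 0.0, 0.0, 0.0
    c = 2 * math.sqrt(u1 * u2) / (u1 + u2)  # coupling degree, in [0, 1]
    t = a * u1 + b * u2                      # weighted composite index
    d = math.sqrt(c * t)                     # coupling coordination degree
    return c, t, d

# Equal subsystem scores give perfect coupling (C = 1); an imbalance
# between the UR and WES scores pulls C below 1.
c_eq, t_eq, d_eq = coupling_coordination(0.02, 0.02)
c_im, _, _ = coupling_coordination(0.02, 0.005)
```

With subsystem scores on the order of 10^(-2), as reported above, D stays small even under perfect coupling, which is consistent with the low CCD means the abstract reports.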
Funding: supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (2023E01006, 2024TSYCCX0004).
Abstract: Arid mountain ecosystems are highly sensitive to hydrothermal stress and land use intensification, yet where net primary productivity (NPP) degradation is likely to persist and what drives it remain unclear in the Tianshan Mountains of Northwest China. We integrated multi-source remote sensing with the Carnegie–Ames–Stanford Approach (CASA) model to estimate NPP during 2000–2020, assessed trend persistence using the Hurst exponent, and identified key drivers and nonlinear thresholds with Extreme Gradient Boosting (XGBoost) and SHapley Additive exPlanations (SHAP). Total NPP averaged 55.74 Tg C/a and ranged from 48.07 to 65.91 Tg C/a from 2000 to 2020, while regional mean NPP rose from 138.97 to 160.69 g C/(m^(2)·a). Land use transfer analysis showed that grassland expanded mainly at the expense of unutilized land and that cropland increased overall. Although NPP increased across 64.11% of the region during 2000–2020, persistence analysis suggested that 53.93% of the Tianshan Mountains was prone to continued NPP decline, including 36.41% with significant projected decline and 17.52% with weak projected decline; these areas formed degradation hotspots concentrated in the central and northern Tianshan Mountains. In contrast, potential improvement was limited (strong persistent improvement: 4.97%; strong anti-persistent improvement: 0.36%). Driver attribution indicated that land use dominated NPP variability (mean absolute SHAP value = 29.54%), followed by precipitation (16.03%) and temperature (11.05%). SHAP dependence analyses showed that precipitation effects stabilized at 300.00–400.00 mm, and temperature exhibited an inverted U-shaped response with a peak near 0.00°C. These findings indicated that persistent degradation risk arose from hydrothermal constraints interacting with land use conversion, highlighting the need for threshold-informed, spatially targeted management to sustain carbon sequestration in arid mountain ecosystems.
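The Hurst exponent used for the persistence analysis above can be estimated with a simplified rescaled-range (R/S) procedure; the pure-Python sketch below uses seeded toy data and is illustrative rather than the paper's exact implementation:

```python
import math
import random

def hurst_rs(series, window_sizes=(8, 16, 32, 64)):
    """Simplified rescaled-range (R/S) estimate of the Hurst exponent.
    H near 0.5 suggests no memory; H > 0.5 suggests a persistent trend."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            mean = sum(chunk) / n
            dev = [x - mean for x in chunk]
            # Cumulative deviation from the window mean.
            cum, z = 0.0, []
            for d in dev:
                cum += d
                z.append(cum)
            r = max(z) - min(z)                          # range of cumulative sums
            s = math.sqrt(sum(d * d for d in dev) / n)   # window std deviation
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(math.log(n))
            log_rs.append(math.log(sum(rs_values) / len(rs_values)))
    # Least-squares slope of log(R/S) against log(n) estimates H.
    k = len(log_n)
    mx, my = sum(log_n) / k, sum(log_rs) / k
    num = sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs))
    den = sum((x - mx) ** 2 for x in log_n)
    return num / den

# White-noise increments (seeded for reproducibility) should give H near 0.5.
rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(1024)]
h = hurst_rs(noise)
```

In persistence mapping, H is typically estimated per pixel on the NPP time series and combined with the trend sign, so that a declining trend with H > 0.5 flags likely continued decline.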