The advantages of genomic selection (GS) in animal and plant breeding are self-evident. Traditional parametric models struggle to fit increasingly large sequencing datasets and to capture complex effects accurately, whereas machine learning models have demonstrated remarkable potential in addressing these challenges. In this study, we introduced the concept of mixed kernel functions to explore the performance of support vector machine regression (SVR) in GS. Six single kernel functions (SVR_L, SVR_C, SVR_G, SVR_P, SVR_S, SVR_L) and four mixed kernel functions (SVR_GS, SVR_GP, SVR_LS, SVR_LP) were used to predict genomic breeding values. Prediction accuracy, mean squared error (MSE), and mean absolute error (MAE) were used as evaluation indicators for comparison with two traditional parametric models (GBLUP, BayesB) and two popular machine learning models (RF, KcRR). The results indicate that in most cases the mixed kernel function models significantly outperform GBLUP, BayesB, and the single kernel functions. For instance, for T1 in the pig dataset, the predictive accuracy of SVR_GS is improved by 10% compared to GBLUP, and by approximately 4.4% and 18.6% compared to SVR_G and SVR_S, respectively. For E1 in the wheat dataset, SVR_GS achieves 13.3% higher prediction accuracy than GBLUP. Among the single kernel functions, the Laplacian and Gaussian kernels yield similar results, with the Gaussian kernel performing better. The mixed kernel functions notably reduce MSE and MAE compared with all single kernel functions. Furthermore, regarding runtime, the SVR_GS and SVR_GP mixed kernel models run approximately three times faster than GBLUP on the pig dataset, with only a slight increase in runtime compared to the single kernel models. In summary, the mixed kernel SVR models are competitive in both speed and accuracy, and models such as SVR_GS have important application potential for GS.
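As a rough illustration of the mixed-kernel idea described above, the sketch below combines a Gaussian (RBF) and a sigmoid kernel in a convex combination and passes it to scikit-learn's SVR as a callable kernel, in the spirit of SVR_GS. The mixing weight, kernel hyperparameters, and the simulated genotype/phenotype data are illustrative assumptions, not the settings or datasets used in the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, sigmoid_kernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

def mixed_gs_kernel(X, Y, weight=0.7, gamma=0.001, coef0=1.0):
    """Convex combination of a Gaussian (RBF) and a sigmoid kernel (an SVR_GS-style mix)."""
    return weight * rbf_kernel(X, Y, gamma=gamma) + (1 - weight) * sigmoid_kernel(X, Y, gamma=gamma, coef0=coef0)

# Toy stand-in for a genotype matrix (rows: individuals, columns: SNP codes 0/1/2)
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 1000)).astype(float)
true_effects = rng.normal(0, 0.05, size=1000)
y = X @ true_effects + rng.normal(0, 1.0, size=500)   # phenotype = additive genetic value + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = SVR(kernel=mixed_gs_kernel, C=10.0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

# Evaluation indicators named in the abstract: accuracy (Pearson r), MSE, MAE
r = np.corrcoef(pred, y_te)[0, 1]
print(f"accuracy (Pearson r) = {r:.3f}, MSE = {mean_squared_error(y_te, pred):.3f}, "
      f"MAE = {mean_absolute_error(y_te, pred):.3f}")
```

In practice the mixing weight itself would be tuned alongside C and gamma, for example by cross-validation on the training individuals.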
The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability. However, research on uplift resistance for special-shaped shield tunnels is limited. This study combines numerical simulation with machine learning techniques to explore this issue. It presents a summary of special-shaped tunnel geometries and introduces a shape coefficient. Using the finite element software Plaxis3D, the study simulates how six key parameters (shape coefficient, burial depth ratio, the tunnel's longest horizontal length, internal friction angle, cohesion, and soil submerged bulk density) affect uplift resistance across different conditions. Based on the numerical simulation results, the feature importance of each parameter was analyzed with XGBoost and ANN methods. The findings demonstrate that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil, whereas the other parameters exhibit the opposite effect. Furthermore, the study reveals a diminishing trend in the feature importance of burial depth ratio, internal friction angle, tunnel longest horizontal length, cohesion, soil submerged bulk density, and shape coefficient in influencing uplift resistance.
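The feature-importance step can be sketched with a gradient-boosting regressor. The data below are synthetic stand-ins for the Plaxis3D results, and the assumed signs of the effects merely follow the abstract's qualitative findings; only the workflow (fit on simulated cases, then rank parameters by importance) is being illustrated.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
n = 400
# Synthetic stand-ins for the six simulated parameters (not the Plaxis3D outputs)
features = ["shape_coefficient", "burial_depth_ratio", "longest_horizontal_length",
            "friction_angle", "cohesion", "submerged_bulk_density"]
X = rng.uniform(0, 1, size=(n, len(features)))
# Assumed signs: a rounder shape reduces uplift resistance, the other factors increase it
uplift = (-1.5 * X[:, 0] + 3.0 * X[:, 1] + 1.0 * X[:, 2] + 2.0 * X[:, 3]
          + 1.2 * X[:, 4] + 0.8 * X[:, 5] + rng.normal(0, 0.1, n))

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, uplift)

# Rank the parameters by their gain-based importance
for name, score in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:28s} importance = {score:.3f}")
```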
Cyclohexene is an important raw material in the production of nylon. Selective hydrogenation of benzene is a key method for preparing cyclohexene. However, the Ru catalysts used in current industrial processes still face challenges, including high metal usage, high process costs, and low cyclohexene yield. This study utilizes existing literature data combined with machine learning methods to analyze the factors influencing benzene conversion, cyclohexene selectivity, and yield in the benzene hydrogenation to cyclohexene reaction. It constructs predictive models based on XGBoost and Random Forest algorithms. After analysis, it was found that reaction time, Ru content, and space velocity are key factors influencing cyclohexene yield, selectivity, and benzene conversion. Shapley Additive Explanations (SHAP) analysis and feature importance analysis further revealed the contribution of each variable to the reaction outcomes. Additionally, we randomly generated one million variable combinations using the Dirichlet distribution to attempt to predict high-yield catalyst formulations. This paper provides new insights into the application of machine learning in heterogeneous catalysis and offers some reference for further research.
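The Dirichlet-sampling screening step can be illustrated as follows: a trained surrogate model is evaluated on a large number of randomly generated composition vectors that sum to one, and the highest-predicted candidates are kept. The composition names, the mock training data, and the Random Forest surrogate are hypothetical stand-ins for the paper's fitted XGBoost/Random Forest models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Hypothetical composition variables constrained to sum to 1 (e.g., mass fractions)
comp_names = ["Ru", "promoter", "support", "additive"]
n_train = 200
X_train = rng.dirichlet(np.ones(len(comp_names)), size=n_train)
y_train = 40 + 30 * X_train[:, 0] - 20 * X_train[:, 3] + rng.normal(0, 2, n_train)  # mock yield (%)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Screen one million Dirichlet-sampled candidate formulations with the surrogate
candidates = rng.dirichlet(np.ones(len(comp_names)), size=1_000_000)
pred_yield = surrogate.predict(candidates)
best = np.argsort(pred_yield)[-5:][::-1]
for i in best:
    print(dict(zip(comp_names, np.round(candidates[i], 3))), f"predicted yield {pred_yield[i]:.1f}%")
```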
In the pharmaceutical field, machine learning can play an important role in drug development, production, and treatment. Co-crystallization techniques have shown promising potential to enhance the properties of active pharmaceutical ingredients (APIs), such as solubility, permeability, and bioavailability, without altering their chemical structure. This approach opens new avenues for developing natural products into effective drugs, especially those previously challenging to formulate. Emodin, an anthraquinone-based natural product, is a notable example due to its diverse biological activities; however, its physicochemical limitations, such as poor solubility and easy sublimation, have restricted its clinical application. While various methods have improved emodin's physicochemical properties, research on its bioavailability remains limited. In our study, we summarize cocrystals and salts produced through co-crystallization technology and identify piperazine as a favorable coformer. Conflicting conclusions from computational chemistry and molecular modeling methods and from machine learning methods regarding the formation of an emodin-piperazine cocrystal or salt led us to validate these possibilities experimentally. Ultimately, we successfully obtained the emodin-piperazine cocrystal, which was characterized and evaluated by several in vitro methods and pharmacokinetic studies. In addition, because experiments have shown that emodin has a therapeutic effect on sepsis, we also evaluated the biological activity of emodin-piperazine in a sepsis model. The results demonstrate that co-crystallization significantly enhances emodin's solubility, permeability, and bioavailability. Pharmacodynamic studies indicate that the emodin-piperazine cocrystal improves sepsis symptoms and provides protective effects against sepsis-associated liver and kidney damage. This study offers renewed hope for natural products with broad biological activities that are hindered by physicochemical limitations, by advancing co-crystallization as a viable development approach.
Accurate assessment of snowpack volumetric liquid water content and bulk density is essential for understanding snow hydrology, avalanche risk management, and monitoring cryosphere changes. This study presents a novel dual-parameter inversion framework that integrates synthetic electromagnetic modelling, dimensionality reduction, and machine learning algorithms to extract relative permittivity and log-resistivity from ground-penetrating radar (GPR) data. Traditional snowpack measurements are invasive, labor-intensive, and limited to point observations. To overcome these limitations, we developed a non-invasive, scalable, and data-driven framework that uses synthetic GPR datasets representing diverse snowpack conditions with variable moisture and density profiles. Synthetic 1D time-series reflections (A-scans) are generated using finite-difference time-domain simulations in the state-of-the-art electromagnetic simulator gprMax. Principal component analysis (PCA) is applied to compress each A-scan while preserving key features, which significantly improved model training efficiency. Four machine learning models, including random forest, neural network, support vector machine, and eXtreme gradient boosting, are trained on the PCA-reduced features. Among these, the neural network model achieved the best performance, with R^2 > 0.97 for permittivity and R^2 > 0.92 for resistivity. Gaussian noise (signal-to-noise ratio of 6 dB) is introduced to the synthetic data, and targeted domain adaptation is then employed to enhance generalization to field data. The framework is validated on two contrasting GPR transects in the Altay Mountains of the Chinese mainland, representing moist (T750) and wet (G125) snowpack conditions. The neural network predictions are most consistent with the GPR-derived estimates, Snowfork measurements, and snow pit data, achieving a volumetric liquid water content deviation of ≤1.5% and a bulk density error within the range of 30-84 kg m^-3. The results demonstrate that machine learning-based inversion, supported by realistic simulations and data augmentation, enables scalable, non-invasive snowpack characterization with significant applications in hydrological forecasting, snow monitoring, and water resource management.
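The PCA-plus-neural-network stage of this pipeline can be sketched as below: each 1D trace is compressed to a small number of principal components and a multilayer perceptron regresses permittivity from them. The synthetic wavelet-based traces and the permittivity-to-delay relationship are crude stand-ins for the gprMax simulations, and the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n_scans, n_samples = 1000, 512

# Synthetic stand-ins for gprMax A-scans: a reflected wavelet whose delay depends on
# a hypothetical relative permittivity of the snow layer
permittivity = rng.uniform(1.5, 3.5, n_scans)
t = np.linspace(0, 1, n_samples)
delay = 0.2 + 0.1 * np.sqrt(permittivity)
ascans = (np.exp(-((t[None, :] - delay[:, None]) / 0.02) ** 2)
          * np.cos(2 * np.pi * 40 * (t[None, :] - delay[:, None])))
ascans += rng.normal(0, 0.05, ascans.shape)   # additive noise, loosely mimicking the noise augmentation

# PCA compresses each A-scan while keeping most of the variance
pca = PCA(n_components=20)
X = pca.fit_transform(ascans)
X_tr, X_te, y_tr, y_te = train_test_split(X, permittivity, test_size=0.2, random_state=0)

nn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out synthetic scans: {r2_score(y_te, nn.predict(X_te)):.3f}")
```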
Machine learning (ML) is recognized as a potent tool for the inverse design of environmental functional materials, particularly for complex entities like biochar-based catalysts (BCs). Tailored BCs can have a distinct ability to trigger the nonradical pathway in advanced oxidation processes (AOPs), promising stable, rapid, and selective degradation of persistent contaminants. However, due to the inherent "black box" nature of ML and the limitations of input features, results and conclusions derived from ML may not always be intuitively understood or comprehensively validated. To tackle this challenge, we linked front-point interpretable analysis approaches with back-point density functional theory (DFT) calculations to form a chained learning strategy that gives deeper insight into the intrinsic activation mechanism of BCs in AOPs. At the front point, we conducted an easy-to-interpret meta-analysis to validate two strategies for enhancing nonradical pathways, namely increasing oxygen content and specific surface area (SSA), and prepared oxidized biochar (OBC500) and SSA-increased biochar (SBC900) by controlling pyrolysis conditions and modification methods. Subsequently, experimental results showed that OBC500 and SBC900 had distinct dominant degradation pathways, singlet oxygen (1O2) generation and electron transfer, respectively. Finally, at the end point, DFT calculations revealed their active sites and degradation mechanisms. This chained learning strategy elucidates fundamental principles for BC inverse design and showcases the exceptional capacity of integrated computational techniques to accelerate catalyst inverse design.
Nasopharyngeal carcinoma (NPC) is a malignant tumor prevalent in southern China and Southeast Asia, where its early detection is crucial for improving patient prognosis and reducing mortality rates. However, existing screening methods suffer from limitations in accuracy and accessibility, hindering their application in large-scale population screening. In this work, a surface-enhanced Raman spectroscopy (SERS)-based method was established to explore the profiles of different stratified components in saliva from NPC patients and healthy subjects after fractionation processing. The study findings indicate that all fractionated samples exhibit disease-associated molecular signaling differences, with the small-molecule fraction (molecular weight cut-off value of 10 kDa) demonstrating superior classification capability: sensitivity of 90.5%, specificity of 75.6%, and an area under the receiver operating characteristic (ROC) curve of 0.925 ± 0.031. The primary objective of this study was to qualitatively explore patterns in saliva composition across groups. The proposed SERS detection strategy for fractionated saliva offers novel insights for enhancing the sensitivity and reliability of noninvasive NPC screening, laying the foundation for translational application in large-scale clinical settings.
Accurate land surface temperature (LST) assessment is crucial for comprehending and reducing the impacts of climate change and for understanding land use evolution. This study presents an innovative method that uses ensemble models, advanced correlation analysis, and trend analysis to investigate the environmental influences on LST. Google Earth Engine (GEE) was used to process Landsat-7 and Landsat-8 datasets for five big cities of Punjab, Pakistan, from 2001 to 2023. Results from this study show significant urban warming trends, and a strong correlation between environmental variables and LST was identified. Three ensemble-based machine learning models, XGBoost, AdaBoost, and random forest (RF), were adopted to improve the accuracy of LST evaluation. Although XGBoost and AdaBoost attained modest levels of accuracy, with R^2 values of 0.767 and 0.706, respectively, the RF model outperformed them with an R^2 of 0.796 and an RMSE of 0.476. Moreover, Pearson correlation analysis revealed a negative relationship between LST and the normalized difference latent heat index (NDLI) with r = -0.67, the normalized difference vegetation index (NDVI) with r = -0.6, and the modified normalized difference water index (MNDWI) with r = -0.57. In addition, wavelet analysis showed that vegetation and water offer long-term LST cooling, lasting up to 64 months, while built-up areas and bare soil contribute to short-term warming, lasting 4 to 8 months. Latent heat indicated variable cooling periods, surpassing 60 months in cities. These findings enhance the understanding of LST changes and the impact of climate change on the environment.
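The correlation-plus-regression part of this workflow can be sketched with synthetic per-pixel values; the index names follow the abstract, but the generated values and the assumed negative relationships are illustrative, not the GEE-derived Landsat data.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(7)
n = 2000
# Synthetic per-pixel index values standing in for the GEE-derived Landsat indices
ndvi = rng.uniform(0.0, 0.8, n)
mndwi = rng.uniform(-0.5, 0.5, n)
ndli = rng.uniform(0.0, 1.0, n)
# Assumed relationship consistent with the abstract: all three indices cool the surface
lst = 42 - 8 * ndvi - 5 * mndwi - 4 * ndli + rng.normal(0, 1.5, n)

for name, idx in [("NDLI", ndli), ("NDVI", ndvi), ("MNDWI", mndwi)]:
    r, _ = pearsonr(idx, lst)
    print(f"Pearson r(LST, {name}) = {r:+.2f}")

X = np.column_stack([ndvi, mndwi, ndli])
X_tr, X_te, y_tr, y_te = train_test_split(X, lst, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"RF regression: R^2 = {r2_score(y_te, pred):.3f}, RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} degC")
```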
Artificial intelligence (AI)-based models have been used to predict the structural, optical, mechanical, and electrochemical properties of zinc oxide/graphene oxide nanocomposites. Machine learning (ML) models such as Artificial Neural Networks (ANN), Support Vector Regression (SVR), Multilayer Perceptron (MLP), and hybrid models, along with fuzzy logic tools, were applied to predict properties such as the wavelength at maximum intensity (444 nm), crystallite size (17.50 nm), and optical bandgap (2.85 eV). Other properties, such as energy density, power density, and charge transfer resistance, were also predicted using a dataset of 1000 samples (80:20 split). In general, the energy parameters were predicted more accurately by the hybrid models. The hydrothermal method was used to synthesize the graphene oxide (GO) and zinc oxide (ZnO) nanocomposites. The increased surface area, conductivity, and stability contributed by graphene oxide in zinc oxide nanoparticles make the composite an attractive option for energy storage. X-ray diffraction (XRD) confirmed a crystallite size of 17.41 nm for the nanocomposite and the presence of GO peaks (at 12.8°). Scanning electron microscopy (SEM) showed wrinkled GO sheets anchored on zinc oxide, with an average particle size of 2.93 μm. Energy-dispersive X-ray spectroscopy (EDX) confirmed the elemental composition, and Fourier-transform infrared spectroscopy (FTIR) revealed the impact of GO on functional groups and electrochemical behavior. The photoluminescence (PL) wavelength of 439 nm and band gap of 2.81 eV show that the material is suitable for energy applications in nanocomposites. In this work, smart nanocomposite materials with improved performance in energy storage and related applications were developed by combining synthesis, characterization, fuzzy logic, and machine learning.
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class intrusion detection using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions and to ensure standardized, model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies; HHO identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach, performed an ablation study to check the effect of each parameter, and found the proposed model to be computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
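The SMOTE-plus-stacking stage of this pipeline can be illustrated with the sketch below, using XGBoost, SVM, and RF base learners under a logistic-regression meta-learner. A synthetic imbalanced dataset stands in for KDD99/UNSW-NB15, and the scaling and HHO feature-selection steps are omitted for brevity; the class weights and model settings are illustrative.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Imbalanced synthetic traffic data standing in for the IDS datasets (3 classes)
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           n_classes=3, weights=[0.85, 0.10, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# SMOTE is applied to the training split only, so no synthetic samples leak into the test set
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

stack = StackingClassifier(
    estimators=[("xgb", XGBClassifier(n_estimators=200, eval_metric="mlogloss")),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=200))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_bal, y_bal)
print(classification_report(y_te, stack.predict(X_te)))
```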
BACKGROUND: This study aims to develop and validate a machine learning-based in-hospital mortality prediction model for acute aortic syndrome (AAS) in the emergency department (ED) and to derive a simplified version suitable for rapid clinical application. METHODS: In this multi-center retrospective cohort study, AAS patient data from three hospitals were analyzed. The modeling cohort included data from the First Affiliated Hospital of Zhengzhou University and the People's Hospital of Xinjiang Uygur Autonomous Region, with Peking University Third Hospital data serving as the external test set. Four machine learning algorithms (logistic regression [LR], multilayer perceptron [MLP], Gaussian naive Bayes [GNB], and random forest [RF]) were used to develop predictive models based on 34 early-accessible clinical variables. A simplified model was then derived from five key variables (Stanford type, pericardial effusion, asymmetric peripheral arterial pulsation, decreased bowel sounds, and dyspnea) selected via Least Absolute Shrinkage and Selection Operator (LASSO) regression to improve ED applicability. RESULTS: A total of 929 patients were included in the modeling cohort, and 210 were included in the external test set. The four machine learning models based on 34 clinical variables achieved internal and external validation AUCs of 0.85-0.90 and 0.73-0.85, respectively. The simplified model incorporating the five key variables demonstrated internal and external validation AUCs of 0.71-0.86 and 0.75-0.78, respectively. Both models showed robust calibration and predictive stability across datasets. CONCLUSION: Both kinds of models, built with machine learning tools, demonstrated useful predictive performance and extrapolated reasonably well to external data.
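The LASSO-based variable selection described here can be sketched with L1-penalized logistic regression, which drives most coefficients to exactly zero and leaves a handful of predictors for a simplified score. The data, the number of truly informative variables, and the regularization strength are all assumptions for illustration; the cohort size and variable count are taken from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n, p = 900, 34                      # roughly the modeling cohort size and variable count above
X = rng.normal(size=(n, p))
# Assume only a few variables truly drive in-hospital mortality in this mock cohort
beta = np.zeros(p)
beta[:5] = [1.2, 0.9, 0.8, 0.7, 0.6]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X @ beta - 2)))).astype(int)

# The L1 (LASSO) penalty shrinks most coefficients to exactly zero
lasso = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.05))
lasso.fit(X, y)
coefs = lasso.named_steps["logisticregression"].coef_.ravel()
selected = np.flatnonzero(coefs)
print(f"{len(selected)} of {p} variables retained:", selected)
```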
Providing safe and high-quality food is crucial for every household and is of great significance to the growth of any society. It is a complex process that spans the entire food chain, from seed to harvest, storage, preparation, and consumption. This paper examines the roles of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. These technologies are well suited to such problems and can provide assurance over food safety. CV is particularly valuable because it improves food processing quality and benefits both industry and researchers; image processing and computer vision are now incorporated into many facets of food production. In this field, DL and ML are used to identify both the type and the quality of food, and comparing approaches in terms of their data requirements and reported results reveals many similarities. The findings of this study will therefore be helpful for scholars looking for a suitable approach to assessing food quality: the review indicates which food products have been studied, points readers to related work, and highlights directions for further research. DL in particular has been integrated effectively into identifying the quality and safety of foods on the market. The paper describes current practices and concerns around ML and DL and probable trends for their future development.
Heterogeneous catalysis is a complex, multiscale phenomenon in which reactions occur at dynamically evolving surfaces. A longstanding goal is to probe these processes to distill design rules for novel catalytic materials, a capability that is essential to the transition toward a sustainable future [1–3].
Landslides pose a formidable natural hazard across the Qinghai-Tibet Plateau (QTP), endangering both ecosystems and human life. Identifying the driving factors behind landslides and accurately assessing susceptibility are key to mitigating disaster risk. This study integrated multi-source historical landslide data with 15 predictive factors and used several machine learning models, namely Random Forest (RF), Gradient Boosting Regression Trees (GBRT), Extreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost), to generate susceptibility maps. The Shapley additive explanation (SHAP) method was applied to quantify factor importance and explore their nonlinear effects. The results showed that: (1) CatBoost was the best-performing model (CA = 0.938, AUC = 0.980) in assessing landslide susceptibility, with altitude emerging as the most significant factor, followed by distance to roads and earthquake sites, precipitation, and slope; (2) the SHAP method revealed critical nonlinear thresholds, demonstrating that historical landslides were concentrated at mid-altitudes (1400-4000 m) and decreased markedly above 4000 m, with a parallel reduction in probability beyond 700 m from roads; and (3) landslide-prone areas, comprising 13% of the QTP, were concentrated in the southeastern and northeastern parts of the plateau. By integrating machine learning and SHAP analysis, this study revealed landslide hazard-prone areas and their driving factors, providing insights to support disaster management strategies and sustainable regional planning.
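The CatBoost-plus-SHAP pattern used here can be sketched as follows, using CatBoost's built-in SHAP values to rank factors by mean absolute contribution. The grid cells, factor names beyond those listed in the abstract, and the assumed mid-altitude/near-road response are synthetic illustrations, not the QTP inventory.

```python
import numpy as np
from catboost import CatBoostClassifier, Pool

rng = np.random.default_rng(11)
n = 3000
factors = ["altitude", "dist_to_road", "precipitation", "slope"]
X = np.column_stack([rng.uniform(500, 6000, n),      # altitude (m)
                     rng.uniform(0, 5000, n),        # distance to roads (m)
                     rng.uniform(100, 1200, n),      # annual precipitation (mm)
                     rng.uniform(0, 60, n)])         # slope (deg)
# Assumed nonlinear response: occurrence concentrated at mid-altitudes and near roads
logit = (1.5 * np.exp(-((X[:, 0] - 2700) / 1200) ** 2)
         - 0.001 * X[:, 1] + 0.02 * X[:, 3] - 0.5)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = CatBoostClassifier(iterations=300, depth=6, verbose=False).fit(X, y)

# Per-sample SHAP values; the last column is the expected (base) value
shap_vals = model.get_feature_importance(Pool(X), type="ShapValues")[:, :-1]
mean_abs = np.abs(shap_vals).mean(axis=0)
for name, val in sorted(zip(factors, mean_abs), key=lambda t: -t[1]):
    print(f"{name:15s} mean |SHAP| = {val:.3f}")
```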
Accurate prediction of concrete compressive strength is fundamental for optimizing mix designs, improving material utilization, and ensuring structural safety in modern construction. Traditional empirical methods often fail to capture the non-linear relationships among concrete constituents, especially with the growing use of supplementary cementitious materials and recycled aggregates. This study presents an integrated machine learning framework for concrete strength prediction, combining advanced regression models, namely CatBoost, with metaheuristic optimization algorithms, with a particular focus on the Somersaulting Spider Optimizer (SSO). A comprehensive dataset encompassing diverse mix proportions and material types was used to evaluate baseline machine learning models, including CatBoost, XGBoost, ExtraTrees, and RandomForest. Among these, CatBoost demonstrated superior accuracy across multiple performance metrics. To further enhance predictive capability, several bio-inspired optimizers were employed for hyperparameter tuning. The SSO-CatBoost hybrid achieved the lowest mean squared error and highest correlation coefficients, outperforming other metaheuristic approaches such as the Genetic Algorithm, Particle Swarm Optimization, and the Grey Wolf Optimizer. Statistical significance was established through Analysis of Variance and Wilcoxon signed-rank testing, confirming the robustness of the optimized models. The proposed methodology not only delivers improved predictive performance but also offers a transparent framework for mix design optimization, supporting data-driven decision making in sustainable and resilient infrastructure development.
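The general pattern of metaheuristic hyperparameter tuning for CatBoost can be sketched as below. The Somersaulting Spider Optimizer itself is beyond a short example, so scipy's differential evolution stands in as the population-based optimizer, and the synthetic regression data stands in for the mix-design dataset; the search bounds are illustrative.

```python
import numpy as np
from catboost import CatBoostRegressor
from scipy.optimize import differential_evolution
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a concrete mix-design dataset (features: mix proportions)
X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)

def neg_cv_score(params):
    """Objective: cross-validated MSE of CatBoost for a given (depth, learning_rate, l2_leaf_reg)."""
    depth, lr, l2 = int(round(params[0])), params[1], params[2]
    model = CatBoostRegressor(iterations=150, depth=depth, learning_rate=lr,
                              l2_leaf_reg=l2, verbose=False, random_seed=0)
    return -cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

# differential_evolution stands in for the population-based metaheuristic (SSO in the paper)
bounds = [(3, 8), (0.01, 0.3), (1.0, 10.0)]
result = differential_evolution(neg_cv_score, bounds, maxiter=3, popsize=5,
                                seed=0, tol=1e-3, polish=False)
print("best (depth, learning_rate, l2_leaf_reg):", np.round(result.x, 3),
      "CV MSE:", round(result.fun, 2))
```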
Objective: The increasing global prevalence of mental health disorders highlights the urgent need for the development of innovative diagnostic methods. Conditions such as anxiety, depression, stress, bipolar disorder (BD), and autism spectrum disorder (ASD) frequently arise from the complex interplay of demographic, biological, and socioeconomic factors, resulting in aggravated symptoms. This review investigates machine intelligence approaches for the early detection and prediction of mental health conditions. Methods: The preferred reporting items for systematic reviews and meta-analyses (PRISMA) framework was employed to conduct a systematic review and analysis covering the period 2018 to 2025. The potential impact of machine intelligence methods was assessed by considering various strategies, hybridization of algorithms, tools, techniques, and datasets, and their applicability. Results: Through a systematic review of studies concentrating on the prediction and evaluation of mental disorders using machine intelligence algorithms, advancements, limitations, and gaps in current methodologies were highlighted. The datasets and tools utilized in these investigations were examined, offering a detailed overview of the status of computational models in understanding and diagnosing mental health disorders. Recent research indicated considerable improvements in diagnostic accuracy and treatment effectiveness, particularly for depression and anxiety, which have shown the greatest methodological diversity and notable advancements in machine intelligence. Conclusions: Despite these improvements, challenges persist, including the need for more diverse datasets, ethical issues surrounding data privacy and algorithmic bias, and obstacles to integrating these technologies into clinical settings. This synthesis emphasizes the transformative potential of machine intelligence in enhancing mental healthcare.
In this paper, we propose a systematic approach for accelerating finite element-type methods by machine learning for the numerical solution of partial differential equations (PDEs). The main idea is to use a neural network to learn the solution map of the PDEs and to do so in an element-wise fashion. This map takes as input the element geometry and the PDE's parameters on that element, and gives as output two operators: (1) the in2out operator for inter-element communication, and (2) the in2sol operator (Green's function) for element-wise solution recovery. A significant advantage of this approach is that, once trained, the network can be used for the numerical solution of the PDE for any domain geometry and any parameter distribution without retraining. Also, the training is significantly simpler since it is done on the element level instead of on the entire domain. We call this approach element learning. This method is closely related to hybridizable discontinuous Galerkin (HDG) methods in the sense that the local solvers of HDG are replaced by machine learning approaches. Numerical tests are presented for an example PDE, the radiative transfer or radiation transport equation, in a variety of scenarios with idealized or realistic cloud fields, with smooth or sharp gradients in the cloud boundary transition. Under a fixed accuracy level of 10^-3 in the relative L^2 error, and polynomial degree p = 6 in each element, we observe an approximately 5 to 10 times speed-up by element learning compared to a classical finite element-type method.
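The element-wise solution map can be written schematically as follows. The notation is illustrative (it is not the authors' exact formulation): on an element K with incoming boundary data g and local PDE parameters mu, the trained network produces the two operators named in the abstract.

```latex
% Illustrative notation for the element-wise solution map (requires amsmath).
% K: an element, \mu: local PDE parameters, g: incoming boundary data on \partial K.
\mathcal{N}_\theta : (K,\mu) \;\longmapsto\; \bigl(\mathcal{A}_{K,\mu},\,\mathcal{G}_{K,\mu}\bigr),
\qquad
\underbrace{u\big|_{\partial K^{\mathrm{out}}} = \mathcal{A}_{K,\mu}\, g}_{\text{in2out: inter-element communication}},
\qquad
\underbrace{u\big|_{K} = \mathcal{G}_{K,\mu}\, g}_{\text{in2sol: element-wise solution recovery}}.
```

In this reading, the in2out operator plays the role of the HDG local solver's boundary-to-boundary map, while the in2sol operator acts as a learned local Green's function that reconstructs the interior solution from the same boundary data.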
The complex interactions and conflicting performance demands in multi-component composites pose significant challenges for achieving balanced multi-property optimization through conventional trial-and-error approaches. Machine learning (ML) offers a promising solution, markedly improving materials discovery efficiency. However, the high dimensionality of feature spaces in such systems has long impeded effective ML-driven feature representation and inverse design. To overcome this, we present an Intelligent Screening System (ISS) framework to accelerate the discovery of optimal formulations balancing four key properties in 15-component PTFE-based copper-clad laminate composites (PTFE-CCLCs). ISS adopts modular descriptors based on the physical information of component volume fractions, thereby simplifying the feature representation. By leveraging the inverse prediction capability of ML models and constructing a performance-driven virtual candidate database, ISS significantly reduced the computational complexity associated with high-dimensional spaces. Experimental validation confirmed that ISS-optimized formulations exhibited superior synergy, notably resolving the trade-off between thermal conductivity and peel strength, and outperformed many commercial counterparts. Despite limited data and inherent process variability, ISS achieved an average prediction accuracy of 76.5%, with thermal conductivity predictions exceeding 90%, demonstrating robust reliability. This work provides an innovative, efficient strategy for multifunctional optimization and accelerated discovery in ultra-complex composite systems, highlighting the integration of ML and advanced materials design.
Accurate prediction of rockburst intensity levels is crucial for ensuring the safety of deep hard rock engineering construction. This paper introduced an expert system for rockburst intensity level prediction that employs machine learning algorithms as the basis for its inference rules. The system comprises four modules: a database, a repository, an inference engine, and an interpreter. A database containing 1114 rockburst cases was used to construct 357 datasets that serve as the repository for the expert system. Additionally, 19 types of machine learning algorithms were used to establish 6783 micro-models to construct cognitive rules within the inference engine. By integrating probability theory and marginal analysis, a fuzzy scoring method based on the SoftMax function was developed and applied to the interpreter for rockburst intensity level prediction, effectively restoring the continuity of rockburst characteristics. The research results indicate that ensemble algorithms based on decision trees are more effective in capturing the characteristics of rockburst. Key factors for accurate prediction of rockburst intensity include uniaxial compressive strength, elastic energy index, the maximum principal stress, tangential stress, and their composite indicators. The accuracy of the proposed rockburst intensity level prediction expert system was verified using 20 engineering rockburst cases, with predictions aligning closely with the actual rockburst intensity levels.
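One way to read the SoftMax-based fuzzy scoring idea is sketched below: per-level scores (for example, averaged votes of many micro-models) are converted by a SoftMax into fuzzy weights, whose weighted sum gives a continuous intensity value rather than a hard class. This is an interpretation for illustration only; the temperature parameter and the level encoding are assumptions, not the paper's exact procedure.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Numerically stable SoftMax with an assumed temperature parameter."""
    z = np.asarray(z, dtype=float) / temperature
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def fuzzy_intensity_score(class_scores, levels=(0, 1, 2, 3)):
    """Map per-level scores to a continuous intensity value: the SoftMax weights each
    discrete level (e.g., none / slight / moderate / strong) and the weighted sum
    restores continuity between levels."""
    w = softmax(class_scores)
    return float(np.dot(w, levels)), w

# Example: micro-model votes slightly favor a moderate rockburst (level 2)
score, weights = fuzzy_intensity_score([0.1, 0.8, 1.6, 0.9])
print("fuzzy weights:", np.round(weights, 3), "continuous intensity ~", round(score, 2))
```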
Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes the recent applications of ML techniques in post-kidney transplant rejection, covering areas such as the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in the field of kidney transplantation. Furthermore, this article discusses the challenges faced in existing research and potential future directions, providing a theoretical basis and technical references for related studies.
基金supported by the China Agriculture Research System of MOF and MARAthe National Natural Science Foundation of China (31872337 and 31501919)the Agricultural Science and Technology Innovation Project,China (ASTIP-IAS02)。
文摘The advantages of genome selection(GS) in animal and plant breeding are self-evident.Traditional parametric models have disadvantage in better fit the increasingly large sequencing data and capture complex effects accurately.Machine learning models have demonstrated remarkable potential in addressing these challenges.In this study,we introduced the concept of mixed kernel functions to explore the performance of support vector machine regression(SVR) in GS.Six single kernel functions(SVR_L,SVR_C,SVR_G,SVR_P,SVR_S,SVR_L) and four mixed kernel functions(SVR_GS,SVR_GP,SVR_LS,SVR_LP) were used to predict genome breeding values.The prediction accuracy,mean squared error(MSE) and mean absolute error(MAE) were used as evaluation indicators to compare with two traditional parametric models(GBLUP,BayesB) and two popular machine learning models(RF,KcRR).The results indicate that in most cases,the performance of the mixed kernel function model significantly outperforms that of GBLUP,BayesB and single kernel function.For instance,for T1 in the pig dataset,the predictive accuracy of SVR_GS is improved by 10% compared to GBLUP,and by approximately 4.4 and 18.6% compared to SVR_G and SVR_S respectively.For E1 in the wheat dataset,SVR_GS achieves 13.3% higher prediction accuracy than GBLUP.Among single kernel functions,the Laplacian and Gaussian kernel functions yield similar results,with the Gaussian kernel function performing better.The mixed kernel function notably reduces the MSE and MAE when compared to all single kernel functions.Furthermore,regarding runtime,SVR_GS and SVR_GP mixed kernel functions run approximately three times faster than GBLUP in the pig dataset,with only a slight increase in runtime compared to the single kernel function model.In summary,the mixed kernel function model of SVR demonstrates speed and accuracy competitiveness,and the model such as SVR_GS has important application potential for GS.
基金Guangzhou Metro Scientific Research Project(No.JT204-100111-23001)Chongqing Municipal Special Project for Technological Innovation and Application Development(No.CSTB2022TIAD-KPX0101)Science and Technology Research and Development Program of China State Railway Group Co.,Ltd.(No.N2023G045)。
文摘The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability.However,research on uplift resistance concerning special-shaped shield tunnels is limited.This study combines numerical simulation with machine learning techniques to explore this issue.It presents a summary of special-shaped tunnel geometries and introduces a shape coefficient.Through the finite element software,Plaxis3D,the study simulates six key parameters—shape coefficient,burial depth ratio,tunnel’s longest horizontal length,internal friction angle,cohesion,and soil submerged bulk density—that impact uplift resistance across different conditions.Employing XGBoost and ANN methods,the feature importance of each parameter was analyzed based on the numerical simulation results.The findings demonstrate that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil,whereas other parameters exhibit the contrary effects.Furthermore,the study reveals a diminishing trend in the feature importance of buried depth ratio,internal friction angle,tunnel longest horizontal length,cohesion,soil submerged bulk density,and shape coefficient in influencing uplift resistance.
基金Supported by CAS Basic and Interdisciplinary Frontier Scientific Research Pilot Project(XDB1190300,XDB1190302)Youth Innovation Promotion Association CAS(Y2021056)+1 种基金Joint Fund of the Yulin University and the Dalian National Laboratory for Clean Energy(YLU-DNL Fund 2022007)The special fund for Science and Technology Innovation Teams of Shanxi Province(202304051001007)。
文摘Cyclohexene is an important raw material in the production of nylon.Selective hydrogenation of benzene is a key method for preparing cyclohexene.However,the Ru catalysts used in current industrial processes still face challenges,including high metal usage,high process costs,and low cyclohexene yield.This study utilizes existing literature data combined with machine learning methods to analyze the factors influencing benzene conversion,cyclohexene selectivity,and yield in the benzene hydrogenation to cyclohexene reaction.It constructs predictive models based on XGBoost and Random Forest algorithms.After analysis,it was found that reaction time,Ru content,and space velocity are key factors influencing cyclohexene yield,selectivity,and benzene conversion.Shapley Additive Explanations(SHAP)analysis and feature importance analysis further revealed the contribution of each variable to the reaction outcomes.Additionally,we randomly generated one million variable combinations using the Dirichlet distribution to attempt to predict high-yield catalyst formulations.This paper provides new insights into the application of machine learning in heterogeneous catalysis and offers some reference for further research.
基金funded by the National Natural Science Foundation of China(No.22278443)CAMS Innovation Fund for Medical Sciences(No.2022-I2M-1-015)+3 种基金the Key R&D Program of Shandong Province(No.2021ZDSYS26)Xinjiang Uygur Autonomous Region Innovation Environment Construction Special Fund and Technology Innovation Base Construction Key Laboratory Open Project(No.2023D04065)2023 Xinjiang Uygur Autonomous Region Innovation Tianchi Talent Introduction Program for financial supportthe Key Project of Natural Science of Bengbu Medical University(No.2024byzd138).
文摘In the pharmaceutical field,machine learning can play an important role in drug development,production and treatment.Co-crystallization techniques have shown promising potential to enhance the properties of active pharmaceutical ingredients(APIs)such as solubility,permeability,and bioavailability,all without altering their chemical structure.This approach opens new avenues for developing natural products into effective drugs,especially those previously challenging in formulation.Emodin,an anthraquinone-based natural product,is a notable example due to its diverse biological activities;however,its physicochemical limitations,such as poor solubility and easy sublimation,restricted its clinical application.While various methods have improved emodin's physicochemical properties,research on its bioavailability remains limited.In our study,we summarize cocrystals and salts produced through co-crystallization technology and identify piperazine as a favorable coformer.Conflicting conclusions from computational chemistry and molecular modeling method and machine learning method regarding the formation of an emodin-piperazine cocrystal or salt led us to experimentally validate these possibilities.Ultimately,we successfully obtained the emodin-piperazine cocrystal,which were characterized and evaluated by several in vitro methods and pharmacokinetic studies.In addition,experiments have shown that emodin has a certain therapeutic effect on sepsis,so we also evaluated emodin-piperazine biological activity in a sepsis model.The results demonstrate that co-crystallization significantly enhances emodin's solubility,permeability,and bioavailability.Pharmacodynamic studies indicate that the emodin-piperazine cocrystal improves sepsis symptoms and provides protective effects against liver and kidney damage associated with sepsis.This study offers renewed hope for natural products with broad biological activities yet hindered by physicochemical limitations by advancing co-crystallization as a viable development approach.
基金supported by the National Key R&D Program of China(Grant Nos.2023YFC3008300&2023YFC3008305)the National Natural Science Foundation of China(Grant No.42172320)+1 种基金the Key Laboratory of Mountain Hazards and Engineering Resilience,Institute of Mountain Hazards and Environment,Chinese Academy of Sciences(Grant Nos.KLMHER-Z06&KLMHER-T07)the Science and Technology Research Program of Institute of Mountain Hazards and Environment,Chinese Academy of Sciences(Grant No.IMHE-CXTD.04).
文摘Accurate assessment of snowpack volumetric liquid water content and bulk density is essential for understanding snow hydrology,avalanche risk management,and monitoring cryosphere changes.This study presents a novel dual-parameter inversion framework that integrates synthetic electromagnetic modelling,dimensionality reduction,and machine learning algorithms to extract relative permittivity and log-resistivity from ground-penetrating radar(GPR)data.Traditional snowpack measurements are invasive,labor-intensive,and limited to point observations.To overcome these limitations,we developed a non-invasive,scalable,and data-driven framework that uses synthetic GPR datasets representing diverse snowpack conditions with variable moisture and density profiles.Synthetic 1D time series reflections(A-scans)are generated using finite-difference time-domain simulations in the state-of-the-art electromagnetic simulator gprMax.Principal component analysis(PCA)is applied to compress each A-scan while preserving key features,which significantly improved and enhanced the model training efficiency.Four machine learning models,including random forest,neural network,support vector machine,and eXtreme gradient boosting,are trained on PCA-reduced features.Among these,the neural network model achieved the best performance,with R^(2)>0.97 for permittivity and R 2>0.92 for resistivity.Gaussian noise(signal-to-noise ratio of 6 dB)is introduced to the synthetic data,and then targeted domain adaptation is employed to enhance generalization to field data.The framework is validated on two contrasting GPR transects in the Altay Mountains of the Chinese mainland,representing moist(T750)and wet(G125)snowpack conditions.The neural network model predictions are most consistent with the GPR derived estimates,Snowfork measurements,and snow pit data,achieving volumetric liquid water content deviation of≤1.5% and bulk density error within the range of 30-84 kg m^(-3).The results demonstrate that machine learning-based inversion,supported by realistic simulations and data augmentation enables scalable,non-invasive snowpack characterization with significant applications in hydrological forecasting,snow monitoring,and water resource management.
基金supported by Project of National and Local Joint Engineering Research Center for Biomass Energy Development and Utilization(Harbin Institute of Technology,No.2021A004).
文摘Machine learning(ML)is recognized as a potent tool for the inverse design of environmental functional material,particularly for complex entities like biochar-based catalysts(BCs).Thus,the tailored BCs can have a distinct ability to trigger the nonradical pathway in advance oxidation processes(AOPs),promising a stable,rapid and selective degradation of persistent contaminants.However,due to the inherent“black box”nature and limitations of input features,results and conclusions derived from ML may not always be intuitively understood or comprehensively validated.To tackle this challenge,we linked the front-point interpretable analysis approaches with back-point density functional theory(DFT)calculations to form a chained learning strategy for deeper sight into the intrinsic activation mechanism of BCs in AOPs.At the front point,we conducted an easy-to-interpret meta-analysis to validate two strategies for enhancing nonradical pathways by increasing oxygen content and specific surface area(SSA),and prepared oxidized biochar(OBC500)and SSA-increased biochar(SBC900)by controlling pyrolysis conditions and modification methods.Subsequently,experimental results showed that OBC500 and SBC900 had distinct dominant degradation pathways for 1O2 generation and electron transfer,respectively.Finally,at the end point,DFT calculations revealed their active sites and degradation mechanisms.This chained learning strategy elucidates fundamental principles for BC inverse design and showcases the exceptional capacity to integrate computational techniques to accelerate catalyst inverse design.
基金financially supported by National Natural Science Foundation ofChina(No.12374405)Provincial Science Foundation for Distinguished Young Scholars of Fujian(No.2024J010024)+1 种基金Natural Science Foundation of Fujian Province of China(No.2023J011267)Major Research Projects for Young and Middle-aged Researchers of Fujian Provincial Health Commission(No.2021ZQNZD010).
文摘Nasopharyngeal carcinoma(NPC)is a malignant tumor prevalent in southern China and Southeast Asia,where its early detection is crucial for improving patient prognosis and reducing mortality rates.However,existing screening methods suffer from limitations in accuracy and accessibility,hindering their application in large-scale population screening.In this work,a surface-enhanced Raman spectroscopy(SERS)-based method was established to explore the profiles of different stratified components in saliva from NPC and healthy subjects after fractionation processing.The study findings indicate that all fractionated samples exhibit diseaseassociated molecular signaling differences,where small-molecule(molecular weight cut-offvalue is 10 kDa)demonstrating superior classification capabilities with sensitivity of 90.5%and speci-ficity of 75.6%,area under receiver operating characteristic(ROC)curve of 0:925±0:031.The primary objective of this study was to qualitatively explore patterns in saliva composition across groups.The proposed SERS detection strategy for fractionated saliva offers novel insights for enhancing the sensitivity and reliability of noninvasive NPC screening,laying the foundation for translational application in large-scale clinical settings.
基金supported by the National Natural Science Foundation of China(Grant Nos.52479045,52279042)the Key Research and Development Program in Guangxi(Grant No.AB23026021)the Open Research Fund of Guangxi Key Laboratory of Water Engineering Materials and Structures,Guangxi Institute of Water Resources Research(Grant No.GXHRIWEMS-2022-07).
文摘Accurate land surface temperature(LST)assessment is crucial for comprehending and reducing the impacts of climate change and understanding land use evolution.This study presents an innovative method by utilizing ensemble models,advanced correlation analysis,and trend analysis to investigate its environmental influences.Google Earth Engine(GEE)was utilized to process the datasets from Landsat-7 and Landsat-8 for the five big cities of Punjab,Pakistan,from 2001 to 2023.Results from this study show significant urban warming trends,and a strong correlation between environmental variables and LST was identified.The ensemble-based three machine learning models,including XGBoost,AdaBoost,and random forest(RF),were adopted to improve the accuracy of LST evaluation.Although XGBoost and AdaBoost attained modest levels of accuracy,with R^(2) values of 0.767 and 0.706,respectively,the RF model outperformed them by achieving an exceptional R^(2) of 0.796 and RMSE of 0.476.Moreover,Pearson correlation analysis revealed a negative relationship between LST and normalized difference latent heat index(NDLI)with r=-0.67,normalized difference vegetation index(NDVI)with r=-0.6,and modified normalized difference water index(MNDWI)with the value of r as -0.57.In addition,wavelet analysis showed that vegetation and water offer long-term LST cooling,lasting up to 64 months,while built-up areas and bare soil contribute to short-term warming,lasting 4 to 8 months.Latent heat indicated variable cooling periods,surpassing 60 months in cities.These findings enhance the understanding of LST changes and the impact of climate change on the environment.
基金extend their gratitude to the Deanship of Scientific Research,Vice Presidency for Graduate Studies and Scientific Research,King Faisal University,Saudi Arabia,for funding the publication of this work under the Ambitious Researcher program(Project No.KFU253806).
文摘Artificial intelligence(AI)based models have been used to predict the structural,optical,mechanical,and electrochemical properties of zinc oxide/graphene oxide nanocomposites.Machine learning(ML)models such as Artificial Neural Networks(ANN),Support Vector Regression(SVR),Multilayer Perceptron(MLP),and hybrid,along with fuzzy logic tools,were applied to predict the different properties like wavelength at maximum intensity(444 nm),crystallite size(17.50 nm),and optical bandgap(2.85 eV).While some other properties,such as energy density,power density,and charge transfer resistance,were also predicted with the help of datasets of 1000(80:20).In general,the energy parameters were predicted more accurately by hybrid models.The hydrothermal method was used to synthesize graphene oxide(GO)and zinc oxide(ZnO)nanocomposites.The increased surface area,conductivity,and stability of graphene oxide in zinc oxide nanoparticles make the composite an ideal option for energy storage.X-ray diffraction(XRD)confirmed the crystallite size of 17.41 nm for the nanocomposite and the presence of GO(12.8○)peaks.The scanning electron microscope(SEM)showed anchored wrinkled GO sheets on zinc oxide with an average particle size of 2.93μm.Energy-dispersive X-ray spectroscopy(EDX)confirmed the elemental composition,and Fouriertransform infrared spectroscopy(FTIR)revealed the impact of GO on functional groups and electrochemical behavior.Photoluminescence(PL)wavelength of(439 nm)and band gap of(2.81 eV)show that the material is suitable for energy applications in nanocomposites.Smart nanocomposite materials with improved performance in energy storage and related applications were fabricated by combining synthesis,characterization,fuzzy logic,and machine learning in this work.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R104)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Modern intrusion detection systems(MIDS)face persistent challenges in coping with the rapid evolution of cyber threats,high-volume network traffic,and imbalanced datasets.Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively.This study introduces an advanced,explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets,which reflects real-world network behavior through a blend of normal and diverse attack classes.The methodology begins with sophisticated data preprocessing,incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions,ensuring standardized and model-ready inputs.Critical dimensionality reduction is achieved via the Harris Hawks Optimization(HHO)algorithm—a nature-inspired metaheuristic modeled on hawks’hunting strategies.HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance.Following feature selection,the SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types.The stacked architecture is then employed,combining the strengths of XGBoost,SVM,and RF as base learners.This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers.The model was evaluated using standard classification metrics:precision,recall,F1-score,and overall accuracy.The best overall performance was recorded with an accuracy of 99.44%for UNSW-NB15,demonstrating the model’s effectiveness.After balancing,the model demonstrated a clear improvement in detecting the attacks.We tested the model on four datasets to show the effectiveness of the proposed approach and performed the ablation study to check the effect of each parameter.Also,the proposed model is computationaly efficient.To support transparency and trust in decision-making,explainable AI(XAI)techniques are incorporated that provides both global and local insight into feature contributions,and offers intuitive visualizations for individual predictions.This makes it suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
基金supported by the special fund of the National Clinical Key Specialty Construction Program[(2022)301-2305].
文摘BACKGROUND:This study aims to develop and validate a machine learning-based in-hospital mortality predictive model for acute aortic syndrome(AAS)in the emergency department(ED)and to derive a simplifi ed version suitable for rapid clinical application.METHODS:In this multi-center retrospective cohort study,AAS patient data from three hospitals were analyzed.The modeling cohort included data from the First Affiliated Hospital of Zhengzhou University and the People’s Hospital of Xinjiang Uygur Autonomous Region,with Peking University Third Hospital data serving as the external test set.Four machine learning algorithms—logistic regression(LR),multilayer perceptron(MLP),Gaussian naive Bayes(GNB),and random forest(RF)—were used to develop predictive models based on 34 early-accessible clinical variables.A simplifi ed model was then derived based on fi ve key variables(Stanford type,pericardial eff usion,asymmetric peripheral arterial pulsation,decreased bowel sounds,and dyspnea)via Least Absolute Shrinkage and Selection Operator(LASSO)regression to improve ED applicability.RESULTS:A total of 929 patients were included in the modeling cohort,and 210 were included in the external test set.Four machine learning models based on 34 clinical variables were developed,achieving internal and external validation AUCs of 0.85-0.90 and 0.73-0.85,respectively.The simplifi ed model incorporating fi ve key variables demonstrated internal and external validation AUCs of 0.71-0.86 and 0.75-0.78,respectively.Both models showed robust calibration and predictive stability across datasets.CONCLUSION:Both kinds of models were built based on machine learning tools,and proved to have certain prediction performance and extrapolation.
Abstract: Providing safe, high-quality food is crucial for every household and is of great significance to the development of any society. Food safety is a complex undertaking that spans the entire food chain, from seed to harvest, storage, preparation, and consumption. This paper examines the roles of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. These technologies are well suited to such problems and provide added assurance of food safety. CV in particular has become valuable because it improves the quality of food processing and benefits both industry and researchers; image processing and computer vision are now incorporated into virtually all stages of food production. In this field, DL and ML are applied both to identify the type of food and to assess its quality. A comparison of the reviewed approaches, in terms of the data used and the results reported, reveals substantial similarities among them. The findings of this study will therefore be helpful for scholars seeking a suitable approach for assessing food quality: the review indicates which food products have already been studied, points readers to related work for further research, and shows how DL is being integrated into the identification of food quality and safety in the market. The paper closes by describing current practices and concerns in ML and DL and probable trends for their future development.
Abstract: Heterogeneous catalysis is a complex, multiscale phenomenon in which reactions occur at dynamically evolving surfaces. A longstanding goal is to probe these processes to distill design rules for novel catalytic materials, a capability that is essential to the transition toward a sustainable future [1–3].
Funding: The National Key Research and Development Program of China, No. 2023YFC3206601.
Abstract: Landslides pose a formidable natural hazard across the Qinghai-Tibet Plateau (QTP), endangering both ecosystems and human life. Identifying the driving factors behind landslides and accurately assessing susceptibility are key to mitigating disaster risk. This study integrated multi-source historical landslide data with 15 predictive factors and used several machine learning models, namely Random Forest (RF), Gradient Boosting Regression Trees (GBRT), Extreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost), to generate susceptibility maps. The Shapley additive explanation (SHAP) method was applied to quantify factor importance and explore their nonlinear effects. The results showed that: (1) CatBoost was the best-performing model (CA=0.938, AUC=0.980) in assessing landslide susceptibility, with altitude emerging as the most significant factor, followed by distance to roads and earthquake sites, precipitation, and slope; (2) the SHAP method revealed critical nonlinear thresholds, demonstrating that historical landslides were concentrated at mid-altitudes (1400-4000 m) and decreased markedly above 4000 m, with a parallel reduction in probability beyond 700 m from roads; and (3) landslide-prone areas, comprising 13% of the QTP, were concentrated in the southeastern and northeastern parts of the plateau. By integrating machine learning and SHAP analysis, this study revealed landslide hazard-prone areas and their driving factors, providing insights to support disaster management strategies and sustainable regional planning.
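As a hedged illustration of the CatBoost-plus-SHAP workflow this abstract summarizes, the sketch below trains a classifier on a toy conditioning-factor table and uses SHAP to rank factor contributions. The predictor names, synthetic labels, and thresholds are placeholders, not the plateau's real inventory or the reported model.

```python
# Hedged sketch: CatBoost susceptibility classifier with SHAP factor importance.
import numpy as np
import pandas as pd
import shap
from catboost import CatBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = pd.DataFrame({
    "altitude_m": rng.uniform(500, 6000, n),
    "dist_to_roads_m": rng.uniform(0, 5000, n),
    "precipitation_mm": rng.uniform(100, 1200, n),
    "slope_deg": rng.uniform(0, 60, n),
})
# Toy nonlinear rule loosely echoing the reported thresholds (mid altitudes, near roads).
p = 1 / (1 + np.exp(-(2.0 * ((X.altitude_m > 1400) & (X.altitude_m < 4000))
                      - 0.001 * X.dist_to_roads_m + 0.02 * X.slope_deg - 1.0)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = CatBoostClassifier(iterations=300, depth=6, verbose=False).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# SHAP values quantify each factor's contribution and expose nonlinear effects.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per factor:", dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```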
Abstract: Accurate prediction of concrete compressive strength is fundamental for optimizing mix designs, improving material utilization, and ensuring structural safety in modern construction. Traditional empirical methods often fail to capture the non-linear relationships among concrete constituents, especially with the growing use of supplementary cementitious materials and recycled aggregates. This study presents an integrated machine learning framework for concrete strength prediction, combining advanced regression models, namely CatBoost, with metaheuristic optimization algorithms, with a particular focus on the Somersaulting Spider Optimizer (SSO). A comprehensive dataset encompassing diverse mix proportions and material types was used to evaluate baseline machine learning models, including CatBoost, XGBoost, ExtraTrees, and RandomForest. Among these, CatBoost demonstrated superior accuracy across multiple performance metrics. To further enhance predictive capability, several bio-inspired optimizers were employed for hyperparameter tuning. The SSO-CatBoost hybrid achieved the lowest mean squared error and the highest correlation coefficients, outperforming other metaheuristic approaches such as the Genetic Algorithm, Particle Swarm Optimization, and the Grey Wolf Optimizer. Statistical significance was established through analysis of variance and Wilcoxon signed-rank testing, confirming the robustness of the optimized models. The proposed methodology not only delivers improved predictive performance but also offers a transparent framework for mix design optimization, supporting data-driven decision making in sustainable and resilient infrastructure development.
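The general loop of metaheuristic hyperparameter tuning for a CatBoost regressor can be sketched as below. The Somersaulting Spider Optimizer itself is not reproduced here; SciPy's differential evolution stands in purely to illustrate how a metaheuristic searches hyperparameters by minimizing cross-validated error, and the data, bounds, and budgets are placeholders.

```python
# Hedged sketch: metaheuristic tuning of CatBoost hyperparameters against CV error.
# Differential evolution is a stand-in for the paper's SSO, not its implementation.
import numpy as np
from catboost import CatBoostRegressor
from scipy.optimize import differential_evolution
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the concrete mix-design table.
X, y = make_regression(n_samples=800, n_features=8, noise=10.0, random_state=0)

def neg_cv_score(params):
    depth, learning_rate, l2_leaf_reg = params
    model = CatBoostRegressor(depth=int(round(depth)),
                              learning_rate=float(learning_rate),
                              l2_leaf_reg=float(l2_leaf_reg),
                              iterations=200, verbose=False, random_seed=0)
    # Minimize negative R^2, i.e., maximize cross-validated R^2.
    return -cross_val_score(model, X, y, cv=3, scoring="r2").mean()

bounds = [(3, 10),        # tree depth
          (0.01, 0.3),    # learning rate
          (1.0, 10.0)]    # L2 leaf regularization
result = differential_evolution(neg_cv_score, bounds, maxiter=10, popsize=8, seed=0)
print("best depth/lr/l2:", result.x, "best CV R^2:", -result.fun)
```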
Abstract: Objective: The increasing global prevalence of mental health disorders highlights the urgent need for the development of innovative diagnostic methods. Conditions such as anxiety, depression, stress, bipolar disorder (BD), and autism spectrum disorder (ASD) frequently arise from the complex interplay of demographic, biological, and socioeconomic factors, resulting in aggravated symptoms. This review investigates machine intelligence approaches for the early detection and prediction of mental health conditions. Methods: The preferred reporting items for systematic reviews and meta-analyses (PRISMA) framework was employed to conduct a systematic review and analysis covering the period 2018 to 2025. The potential impact of machine intelligence methods was assessed by considering various strategies, hybridization of algorithms, tools, techniques, and datasets, and their applicability. Results: Through a systematic review of studies concentrating on the prediction and evaluation of mental disorders using machine intelligence algorithms, advancements, limitations, and gaps in current methodologies were highlighted. The datasets and tools utilized in these investigations were examined, offering a detailed overview of the status of computational models in understanding and diagnosing mental health disorders. Recent research indicated considerable improvements in diagnostic accuracy and treatment effectiveness, particularly for depression and anxiety, which have shown the greatest methodological diversity and notable advancements in machine intelligence. Conclusions: Despite these improvements, challenges persist, including the need for more diverse datasets, ethical issues surrounding data privacy and algorithmic bias, and obstacles to integrating these technologies into clinical settings. This synthesis emphasizes the transformative potential of machine intelligence in enhancing mental healthcare.
基金partially supported by the NSF(Grant No.DMS 2324368)by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin-Madison with funding from the Wisconsin Alumni Research Foundation.
Abstract: In this paper, we propose a systematic approach for accelerating finite element-type methods by machine learning for the numerical solution of partial differential equations (PDEs). The main idea is to use a neural network to learn the solution map of the PDEs and to do so in an element-wise fashion. This map takes as input the element geometry and the PDE's parameters on that element, and gives as output two operators: (1) the in2out operator for inter-element communication, and (2) the in2sol operator (Green's function) for element-wise solution recovery. A significant advantage of this approach is that, once trained, the network can be used for the numerical solution of the PDE for any domain geometry and any parameter distribution without retraining. The training is also significantly simpler since it is done at the element level instead of over the entire domain. We call this approach element learning. The method is closely related to hybridizable discontinuous Galerkin (HDG) methods in the sense that the local solvers of HDG are replaced by machine learning approaches. Numerical tests are presented for an example PDE, the radiative transfer or radiation transport equation, in a variety of scenarios with idealized or realistic cloud fields, with smooth or sharp gradients in the cloud boundary transition. Under a fixed accuracy level of 10^(-3) in the relative L^2 error, and polynomial degree p=6 in each element, we observe an approximately 5 to 10 times speed-up with element learning compared to a classical finite element-type method.
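A minimal sketch of the element-learning idea follows: a small network mapping per-element inputs (geometry plus local PDE parameters) to two flattened local operators, one for inter-element communication and one for interior solution recovery. The shapes, architecture, and random training targets are assumptions for illustration; in the paper's setting the targets would come from classical local solvers, as in HDG.

```python
# Hedged sketch: a per-element network with two operator-valued heads.
import torch
import torch.nn as nn

N_GEOM, N_PARAM = 8, 16          # per-element geometry / parameter descriptors (assumed)
N_TRACE, N_INTERIOR = 12, 20     # face-trace and interior degrees of freedom (assumed)

class ElementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(N_GEOM + N_PARAM, 128), nn.ReLU(),
                                   nn.Linear(128, 128), nn.ReLU())
        # in2out: maps incoming face traces to outgoing ones (inter-element communication).
        self.in2out_head = nn.Linear(128, N_TRACE * N_TRACE)
        # in2sol: maps face traces to the element-interior solution (local recovery).
        self.in2sol_head = nn.Linear(128, N_INTERIOR * N_TRACE)

    def forward(self, x):
        h = self.trunk(x)
        in2out = self.in2out_head(h).view(-1, N_TRACE, N_TRACE)
        in2sol = self.in2sol_head(h).view(-1, N_INTERIOR, N_TRACE)
        return in2out, in2sol

# Placeholder training loop against random "reference" operators.
net = ElementNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(256, N_GEOM + N_PARAM)
ref_out = torch.randn(256, N_TRACE, N_TRACE)
ref_sol = torch.randn(256, N_INTERIOR, N_TRACE)
for _ in range(100):
    pred_out, pred_sol = net(x)
    loss = nn.functional.mse_loss(pred_out, ref_out) + nn.functional.mse_loss(pred_sol, ref_sol)
    opt.zero_grad()
    loss.backward()
    opt.step()
```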
基金financially supported by the National Key Research and Development Project of China(No.2022YFB3806900)。
Abstract: The complex interactions and conflicting performance demands in multi-component composites pose significant challenges for achieving balanced multi-property optimization through conventional trial-and-error approaches. Machine learning (ML) offers a promising solution, markedly improving the efficiency of materials discovery. However, the high dimensionality of the feature spaces in such systems has long impeded effective ML-driven feature representation and inverse design. To overcome this, we present an Intelligent Screening System (ISS) framework to accelerate the discovery of optimal formulations balancing four key properties in 15-component PTFE-based copper-clad laminate composites (PTFE-CCLCs). ISS adopts modular descriptors based on the physical information of component volume fractions, thereby simplifying the feature representation. By leveraging the inverse prediction capability of ML models and constructing a performance-driven virtual candidate database, ISS significantly reduces the computational complexity associated with high-dimensional spaces. Experimental validation confirmed that ISS-optimized formulations exhibited superior synergy, notably resolving the trade-off between thermal conductivity and peel strength, and outperformed many commercial counterparts. Despite limited data and inherent process variability, ISS achieved an average prediction accuracy of 76.5%, with thermal conductivity predictions exceeding 90%, demonstrating robust reliability. This work provides an innovative, efficient strategy for multifunctional optimization and accelerated discovery in ultra-complex composite systems, highlighting the value of integrating ML with advanced materials design.
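The virtual-screening idea behind such a framework can be sketched as follows: sample candidate volume-fraction formulations, predict the target properties with a surrogate model, and keep only candidates meeting all thresholds. The component count, surrogate model, synthetic property data, and thresholds below are placeholders, not the ISS implementation.

```python
# Hedged sketch: performance-driven screening of a virtual candidate database.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
N_COMPONENTS, N_PROPS = 15, 4

# Placeholder training data: volume fractions (rows sum to 1) mapped to four properties.
X_train = rng.dirichlet(np.ones(N_COMPONENTS), size=300)
y_train = X_train @ rng.normal(size=(N_COMPONENTS, N_PROPS)) + rng.normal(0, 0.05, (300, N_PROPS))

surrogate = MultiOutputRegressor(GradientBoostingRegressor(random_state=0)).fit(X_train, y_train)

# Virtual candidate database: many random formulations on the composition simplex.
candidates = rng.dirichlet(np.ones(N_COMPONENTS), size=20000)
pred = surrogate.predict(candidates)

# Keep formulations predicted to satisfy all four (illustrative) property thresholds.
thresholds = np.quantile(pred, 0.85, axis=0)          # top-15% cut per property
mask = (pred >= thresholds).all(axis=1)
print(f"{mask.sum()} of {len(candidates)} candidates pass all property screens")
```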
Funding: Project (42077244) supported by the National Natural Science Foundation of China; Project (2020-05) supported by the Open Research Fund of Guangdong Provincial Key Laboratory of Deep Earth Sciences and Geothermal Energy Exploitation and Utilization, China.
Abstract: Accurate prediction of rockburst intensity levels is crucial for ensuring the safety of deep hard rock engineering construction. This paper introduces an expert system for rockburst intensity level prediction that employs machine learning algorithms as the basis for its inference rules. The system comprises four modules: a database, a repository, an inference engine, and an interpreter. A database containing 1114 rockburst cases was used to construct 357 datasets that serve as the repository for the expert system. Additionally, 19 types of machine learning algorithms were used to establish 6783 micro-models that constitute the cognitive rules within the inference engine. By integrating probability theory and marginal analysis, a fuzzy scoring method based on the SoftMax function was developed and applied in the interpreter for rockburst intensity level prediction, effectively restoring the continuity of rockburst characteristics. The research results indicate that ensemble algorithms based on decision trees are more effective in capturing the characteristics of rockburst. Key factors for accurate prediction of rockburst intensity include uniaxial compressive strength, elastic energy index, maximum principal stress, tangential stress, and their composite indicators. The accuracy of the proposed rockburst intensity level prediction expert system was verified using 20 engineering rockburst cases, with predictions aligning closely with the actual rockburst intensity levels.
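One plausible reading of the SoftMax-based fuzzy scoring step is sketched below: per-class outputs from many micro-models are pooled, passed through a SoftMax, and converted into a continuous intensity score as a probability-weighted mean of the ordinal levels. The aggregation rule, level coding, and temperature are assumptions for illustration, not the paper's interpreter.

```python
# Hedged sketch: SoftMax-based fuzzy score over pooled micro-model outputs.
import numpy as np

LEVELS = np.array([0, 1, 2, 3])  # none, light, moderate, strong (illustrative coding)

def fuzzy_intensity(per_model_probs: np.ndarray, temperature: float = 1.0):
    """per_model_probs: (n_models, n_levels) class probabilities for one case."""
    pooled = per_model_probs.mean(axis=0)                 # pool the micro-model outputs
    z = pooled / temperature
    softmax = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
    score = float((softmax * LEVELS).sum())               # continuous fuzzy intensity score
    return softmax, score

# Toy example: three micro-models disagreeing between moderate and strong rockburst.
probs = np.array([[0.05, 0.10, 0.60, 0.25],
                  [0.02, 0.08, 0.45, 0.45],
                  [0.03, 0.12, 0.55, 0.30]])
membership, score = fuzzy_intensity(probs)
print("fuzzy membership:", np.round(membership, 3), "continuous level:", round(score, 2))
```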
Abstract: Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes recent applications of ML techniques in post-kidney transplant rejection, covering areas such as the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in the field of kidney transplantation. Furthermore, this article discusses the challenges faced in existing research and potential future directions, providing a theoretical basis and technical references for related studies.