The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability. However, research on uplift resistance concerning special-shaped shield tunnels is limited. This study combines numerical simulation with machine learning techniques to explore this issue. It presents a summary of special-shaped tunnel geometries and introduces a shape coefficient. Using the finite element software Plaxis3D, the study simulates, across different conditions, six key parameters that impact uplift resistance: shape coefficient, burial depth ratio, the tunnel's longest horizontal length, internal friction angle, cohesion, and soil submerged bulk density. Employing XGBoost and ANN methods, the feature importance of each parameter was analyzed based on the numerical simulation results. The findings demonstrate that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil, whereas the other parameters exhibit the contrary effect. Furthermore, the study reveals a diminishing trend in the feature importance of burial depth ratio, internal friction angle, tunnel longest horizontal length, cohesion, soil submerged bulk density, and shape coefficient in influencing uplift resistance.
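The feature-importance step above can be sketched in a few lines. This is a minimal stand-in, assuming scikit-learn's gradient boosting in place of XGBoost and synthetic data in place of the Plaxis3D results; the parameter names come from the abstract, the toy response function is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Six input parameters from the study (synthetic stand-ins for the
# Plaxis3D simulation outputs).
names = ["shape_coef", "depth_ratio", "horiz_len",
         "friction_angle", "cohesion", "submerged_density"]
X = rng.uniform(0.0, 1.0, size=(300, 6))

# Toy uplift-resistance response: depth ratio dominates, and the shape
# coefficient reduces resistance, matching the abstract's ranking.
y = (3.0 * X[:, 1] + 1.5 * X[:, 3] + 0.8 * X[:, 2] - 0.5 * X[:, 0]
     + 0.3 * X[:, 4] + 0.2 * X[:, 5] + rng.normal(0, 0.05, 300))

model = GradientBoostingRegressor(random_state=0).fit(X, y)
importance = dict(zip(names, model.feature_importances_))
ranked = sorted(importance, key=importance.get, reverse=True)
print(ranked)
```

With real simulation outputs, the same `feature_importances_` attribute (or an ANN with permutation importance) yields the ranking the study reports.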
Countries around the world have been making efforts to reduce pollutant emissions. However, the response of global black carbon (BC) aging to emission changes remains unclear. Using the Community Atmosphere Model version 6 with a machine-learning-integrated four-mode version of the Modal Aerosol Module, we quantify global BC aging responses to emission reductions for 2011–2018 and for 2050 and 2100 under carbon neutrality. During 2011–2018, global trends in BC aging degree (mass ratio of coatings to BC, R_(BC)) exhibited marked regional disparities, with a significant increase in China (5.4% yr^(-1)) contrasting with minimal changes in the USA, Europe, and India. The divergence is attributed to opposing trends in secondary organic aerosol (SOA) and sulfate coatings, driven by regional changes in the emission ratios of the corresponding coating precursors to BC (volatile organic compounds, VOCs/BC, and SO_(2)/BC). Projections under carbon neutrality reveal that R_(BC) will increase globally by 47% (118%) in 2050 (2100), with strong convergent increases expected across major source regions. The R_(BC) increase, primarily driven by enhanced SOA coatings due to sharper BC reductions relative to VOCs, will enhance the global BC mass absorption cross-section (MAC) by 11% (17%) in 2050 (2100). Consequently, although the global BC burden will decline sharply by 60% (76%), the enhanced MAC partially offsets the decline in the BC direct radiative effect (DRE), moderating global BC DRE decreases to 88% (92%) of the BC burden reductions in 2050 (2100). This study highlights the globally enhanced BC aging and light absorption capacity under carbon neutrality, which partly offsets the impact of BC direct emission reductions on future changes in BC radiative effects globally.
The advantages of genomic selection (GS) in animal and plant breeding are self-evident. Traditional parametric models struggle to fit increasingly large sequencing datasets and to capture complex effects accurately. Machine learning models have demonstrated remarkable potential in addressing these challenges. In this study, we introduced the concept of mixed kernel functions to explore the performance of support vector machine regression (SVR) in GS. Six single kernel functions (SVR_L, SVR_C, SVR_G, SVR_P, SVR_S, SVR_L) and four mixed kernel functions (SVR_GS, SVR_GP, SVR_LS, SVR_LP) were used to predict genomic breeding values. Prediction accuracy, mean squared error (MSE), and mean absolute error (MAE) were used as evaluation indicators for comparison against two traditional parametric models (GBLUP, BayesB) and two popular machine learning models (RF, KcRR). The results indicate that, in most cases, the mixed kernel function models significantly outperform GBLUP, BayesB, and the single kernel functions. For instance, for T1 in the pig dataset, the predictive accuracy of SVR_GS is improved by 10% compared to GBLUP, and by approximately 4.4% and 18.6% compared to SVR_G and SVR_S, respectively. For E1 in the wheat dataset, SVR_GS achieves 13.3% higher prediction accuracy than GBLUP. Among the single kernel functions, the Laplacian and Gaussian kernels yield similar results, with the Gaussian kernel performing better. The mixed kernel functions notably reduce the MSE and MAE compared to all single kernel functions. Furthermore, regarding runtime, the SVR_GS and SVR_GP mixed kernel functions run approximately three times faster than GBLUP in the pig dataset, with only a slight increase in runtime compared to the single kernel function models. In summary, the mixed kernel function models of SVR are competitive in both speed and accuracy, and models such as SVR_GS have important application potential for GS.
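A mixed kernel of the SVR_GS type (Gaussian plus sigmoid) can be passed to scikit-learn's SVR as a callable. The sketch below is an illustrative assumption, not the paper's implementation: the weighting scheme, hyperparameters, and the SNP-like toy data are all invented for demonstration.

```python
import numpy as np
from sklearn.svm import SVR

def mixed_kernel(X, Y, gamma=0.02, a=0.01, b=0.0, w=0.7):
    """Weighted Gaussian + sigmoid gram matrix (an SVR_GS-style mix;
    the convex weighting w is an illustrative assumption)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    gaussian = np.exp(-gamma * sq)
    sigmoid = np.tanh(a * (X @ Y.T) + b)
    return w * gaussian + (1 - w) * sigmoid

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 50))            # SNP-like predictors (toy)
beta = rng.normal(size=50) * (rng.random(50) < 0.1)
y = X @ beta + rng.normal(0, 0.3, 120)    # toy breeding values

model = SVR(kernel=mixed_kernel, C=10.0).fit(X[:100], y[:100])
r = np.corrcoef(model.predict(X[100:]), y[100:])[0, 1]
print(f"prediction accuracy (r) = {r:.3f}")
```

Because the callable receives the full gram matrix, any convex combination of valid kernels remains a valid kernel, which is what makes such mixes easy to evaluate against single-kernel baselines.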
Accurate retrieval of atmospheric vertical profiles is critical for improving weather prediction and climate monitoring. However, the complexity of atmospheric processes in cloudy regions poses challenges compared with clear-sky scenarios. This study presents a novel framework that integrates Bayesian optimization and machine learning approaches to retrieve atmospheric vertical profiles—including temperature, humidity, ozone concentration, cloud fraction, ice water content (IWC), and liquid water content (LWC)—from hyperspectral infrared observations. Specifically, a Bayesian method was used to refine ERA5 reanalysis data by minimizing brightness temperature (BT) discrepancies against FY-4B Geostationary Interferometric Infrared Sounder (GIIRS) observations, generating a high-quality profile database (~2.8 million profiles) across diverse weather systems. The optimized profiles improve radiative consistency, reducing BT biases from >40 K to <10 K in cloudy regions. To further overcome the limitations of the Bayesian method, we developed a Transformer-ResNet hybrid model (TERNet), which achieved superior performance with RMSE values of 1.61 K (temperature), 5.77% (humidity), and 2.25×10^(-6)/6.09×10^(-6) kg kg^(-1) (IWC/LWC) across all vertical levels in all-sky conditions. TERNet outperforms both ERA5 in cloud parameter retrieval and the GIIRS L2 product in thermodynamic profiling. Independent verification with radiosonde and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) datasets confirms the framework's reliability across various meteorological regimes. This work demonstrates the capability of combining physics-informed Bayesian methods with data-driven machine learning to fully exploit hyperspectral IR data.
Nasopharyngeal carcinoma (NPC) is a malignant tumor prevalent in southern China and Southeast Asia, where early detection is crucial for improving patient prognosis and reducing mortality rates. However, existing screening methods suffer from limitations in accuracy and accessibility, hindering their application in large-scale population screening. In this work, a surface-enhanced Raman spectroscopy (SERS)-based method was established to explore the profiles of different stratified components in saliva from NPC patients and healthy subjects after fractionation processing. The findings indicate that all fractionated samples exhibit disease-associated molecular signal differences, with the small-molecule fraction (molecular weight cut-off value of 10 kDa) demonstrating superior classification capability: a sensitivity of 90.5%, a specificity of 75.6%, and an area under the receiver operating characteristic (ROC) curve of 0.925±0.031. The primary objective of this study was to qualitatively explore patterns in saliva composition across groups. The proposed SERS detection strategy for fractionated saliva offers novel insights for enhancing the sensitivity and reliability of noninvasive NPC screening, laying the foundation for translational application in large-scale clinical settings.
Artificial intelligence (AI)-based models have been used to predict the structural, optical, mechanical, and electrochemical properties of zinc oxide/graphene oxide nanocomposites. Machine learning (ML) models such as artificial neural networks (ANN), support vector regression (SVR), multilayer perceptron (MLP), and hybrid models, along with fuzzy logic tools, were applied to predict properties such as the wavelength at maximum intensity (444 nm), crystallite size (17.50 nm), and optical bandgap (2.85 eV). Other properties, such as energy density, power density, and charge transfer resistance, were also predicted using datasets of 1000 entries with an 80:20 split. In general, the energy parameters were predicted more accurately by the hybrid models. The hydrothermal method was used to synthesize the graphene oxide (GO) and zinc oxide (ZnO) nanocomposites. The increased surface area, conductivity, and stability imparted by graphene oxide to zinc oxide nanoparticles make the composite an ideal option for energy storage. X-ray diffraction (XRD) confirmed a crystallite size of 17.41 nm for the nanocomposite and the presence of GO peaks (12.8°). Scanning electron microscopy (SEM) showed wrinkled GO sheets anchored on zinc oxide, with an average particle size of 2.93 μm. Energy-dispersive X-ray spectroscopy (EDX) confirmed the elemental composition, and Fourier-transform infrared spectroscopy (FTIR) revealed the impact of GO on functional groups and electrochemical behavior. A photoluminescence (PL) wavelength of 439 nm and a band gap of 2.81 eV show that the material is suitable for energy applications. In this work, smart nanocomposite materials with improved performance in energy storage and related applications were fabricated by combining synthesis, characterization, fuzzy logic, and machine learning.
Modern intrusion detection systems (IDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class intrusion detection using the KDD99 and related IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with data preprocessing that applies both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized, model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model showed a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each component. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
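The balance-then-stack pipeline can be sketched with scikit-learn alone. Two substitutions are assumptions: naive random oversampling stands in for SMOTE (which synthesizes new minority samples rather than duplicating them), and gradient boosting stands in for XGBoost; the synthetic traffic data is likewise a placeholder.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Imbalanced multi-class stand-in for the IDS traffic data.
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=3, weights=[0.7, 0.2, 0.1],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# Naive random oversampling of minority classes (SMOTE stand-in).
rng = np.random.default_rng(0)
counts = np.bincount(ytr)
parts_X, parts_y = [Xtr], [ytr]
for cls, n in enumerate(counts):
    idx = np.where(ytr == cls)[0]
    extra = rng.choice(idx, counts.max() - n, replace=True)
    parts_X.append(Xtr[extra]); parts_y.append(ytr[extra])
Xb, yb = np.vstack(parts_X), np.concatenate(parts_y)

# Stacked ensemble: RF + SVM + gradient boosting (XGBoost stand-in)
# feeding a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(Xb, yb)
print(f"test accuracy = {stack.score(Xte, yte):.3f}")
```

The meta-learner sees out-of-fold predictions from each base model, which is what lets stacking trade off their individual biases and variances.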
BACKGROUND: This study aims to develop and validate a machine learning-based in-hospital mortality prediction model for acute aortic syndrome (AAS) in the emergency department (ED) and to derive a simplified version suitable for rapid clinical application. METHODS: In this multi-center retrospective cohort study, AAS patient data from three hospitals were analyzed. The modeling cohort included data from the First Affiliated Hospital of Zhengzhou University and the People's Hospital of Xinjiang Uygur Autonomous Region, with Peking University Third Hospital data serving as the external test set. Four machine learning algorithms—logistic regression (LR), multilayer perceptron (MLP), Gaussian naive Bayes (GNB), and random forest (RF)—were used to develop predictive models based on 34 early-accessible clinical variables. A simplified model was then derived from five key variables (Stanford type, pericardial effusion, asymmetric peripheral arterial pulsation, decreased bowel sounds, and dyspnea) selected via least absolute shrinkage and selection operator (LASSO) regression to improve ED applicability. RESULTS: A total of 929 patients were included in the modeling cohort, and 210 in the external test set. The four machine learning models based on 34 clinical variables achieved internal and external validation AUCs of 0.85-0.90 and 0.73-0.85, respectively. The simplified model incorporating the five key variables demonstrated internal and external validation AUCs of 0.71-0.86 and 0.75-0.78, respectively. Both models showed robust calibration and predictive stability across datasets. CONCLUSION: Both the full and simplified machine learning models demonstrated solid predictive performance and external generalizability.
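The LASSO step that shrinks 34 candidate variables down to a handful can be sketched with an L1-penalized logistic regression. The cohort below is synthetic, and the five "true" indices merely mimic the idea of a few dominant predictors; nothing here reproduces the actual patient data or coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400

# Synthetic cohort: binary stand-ins for the 34 early-accessible
# variables; five of them drive mortality in this toy example.
X = (rng.random((n, 34)) < 0.3)
true_idx = [0, 5, 11, 20, 33]
logit = -2.0 + X[:, true_idx] @ np.array([1.5, 1.2, 1.0, 0.9, 0.8])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# L1-penalized logistic regression: the LASSO-style step that zeroes
# out most coefficients, leaving a small set of key predictors.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(X.astype(float), y)
selected = np.flatnonzero(lasso.coef_[0])
print(f"{len(selected)} of 34 variables kept:", selected)
```

In practice the penalty strength C is tuned (e.g., by cross-validation) so that the retained set is small enough for bedside use, as in the five-variable simplified model.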
This study investigates the uncertain dynamic characterization of hybrid composite plates using advanced machine-assisted finite element methodologies. Hybrid composites, widely used in aerospace, automotive, and structural applications, often face variability in material properties, geometric configurations, and manufacturing processes, leading to uncertainty in their dynamic response. To address this, three surrogate-based machine learning approaches—radial basis functions (RBF), multivariate adaptive regression splines (MARS), and polynomial neural networks (PNN)—are integrated with a finite element framework to efficiently capture the stochastic behavior of these plates. The research focuses on predicting the first three natural frequencies under material uncertainties, which are critical to ensuring structural reliability. Monte Carlo simulation (MCS) is used as a benchmark for generating probabilistic datasets, including mean values, standard deviations, and probability density functions. The surrogate models are then trained and validated against these datasets, enabling accurate representation of uncertainty with substantially fewer samples than conventional MCS. Among the methods studied, the RBF model demonstrates superior performance, closely approximating MCS results with a reduced sample size and thereby achieving significant computational savings. The proposed framework not only reduces computational time and costs but also maintains high predictive accuracy, making it well-suited for complex engineering systems. Beyond free vibration analysis, the methodology can be extended to more sophisticated scenarios, such as forced vibration, damping effects, and nonlinear structural responses. Overall, this work presents a computationally efficient and robust approach for surrogate-based uncertainty quantification, advancing the analysis and design of hybrid composite structures under uncertainty.
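The core idea—replace thousands of expensive model evaluations with a surrogate fitted on a few—can be shown with a one-dimensional Gaussian RBF surrogate. The closed-form "finite element" response, the uncertain modulus distribution, and the shape parameter below are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def natural_freq(E):
    # Toy stand-in for a finite element solve: first natural frequency
    # as a smooth function of an uncertain elastic modulus.
    return 12.0 * np.sqrt(E)

# Expensive route: direct Monte Carlo with 10,000 model evaluations.
E_mc = rng.normal(70.0, 3.5, 10000)
f_mc = natural_freq(E_mc)

# Cheap route: fit a Gaussian RBF surrogate on just 15 model runs...
E_tr = np.linspace(55.0, 85.0, 15)
f_tr = natural_freq(E_tr)
eps = 0.3  # RBF shape parameter (illustrative choice)
Phi = np.exp(-(eps * (E_tr[:, None] - E_tr[None, :]))**2)
w = np.linalg.solve(Phi + 1e-8 * np.eye(15), f_tr)

# ...then push the full Monte Carlo sample through the surrogate.
f_sur = np.exp(-(eps * (E_mc[:, None] - E_tr[None, :]))**2) @ w
print(f"MC mean {f_mc.mean():.3f} vs surrogate mean {f_sur.mean():.3f}")
```

The statistics of the surrogate-propagated sample (mean, standard deviation, density) approximate the MCS benchmark at a fraction of the solver cost, which is the computational saving the study exploits.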
The rational design of high-performance electrochemical energy storage devices critically depends on a fundamental understanding of ion-electrode interactions at the molecular scale. Herein, we employ interpretable machine learning (ML) to reveal electrolyte hydration energy as a universal descriptor governing ion-specific capacitance in two-dimensional (2D) materials. Through explainable ML, we elucidate how ion hydration shell stability and size critically influence charge transport and storage at the electrode-electrolyte interface. Our analysis identifies hydration energy, not ionic size, as the primary factor dictating capacitance, challenging prevailing assumptions and providing quantifiable design rules for electrolyte selection. These insights offer a data-driven pathway to optimize 2D materials for supercapacitors and beyond, including batteries and electrocatalytic systems. This work demonstrates the power of explainable artificial intelligence in uncovering molecular-level mechanisms that accelerate the discovery and development of next-generation energy storage technologies.
Accurate prediction of rockburst intensity levels is crucial for ensuring the safety of deep hard rock engineering construction. This paper introduces an expert system for rockburst intensity level prediction that employs machine learning algorithms as the basis for its inference rules. The system comprises four modules: a database, a repository, an inference engine, and an interpreter. A database containing 1114 rockburst cases was used to construct 357 datasets that serve as the repository for the expert system. Additionally, 19 types of machine learning algorithms were used to establish 6783 micro-models that constitute the cognitive rules within the inference engine. By integrating probability theory and marginal analysis, a fuzzy scoring method based on the softmax function was developed and applied in the interpreter for rockburst intensity level prediction, effectively restoring the continuity of rockburst characteristics. The results indicate that ensemble algorithms based on decision trees are more effective at capturing the characteristics of rockburst. Key factors for accurate prediction of rockburst intensity include uniaxial compressive strength, the elastic energy index, the maximum principal stress, tangential stress, and their composite indicators. The accuracy of the proposed expert system was verified using 20 engineering rockburst cases, with predictions aligning closely with the actual rockburst intensity levels.
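Softmax-based fuzzy scoring over the votes of many micro-models can be sketched in a few lines. The vote counts and the temperature scaling below are illustrative assumptions, not the paper's calibrated interpreter; the point is how discrete level votes become a continuous score.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

# Votes from an ensemble of micro-models: how many predicted rockburst
# level 0 (none), 1 (slight), 2 (moderate), 3 (strong). Toy numbers.
votes = np.array([1.0, 4.0, 12.0, 3.0])

# Temperature-scaled softmax turns raw vote shares into fuzzy scores.
scores = softmax(votes / votes.sum() * 5.0)

# Expectation over levels restores a continuous intensity measure.
expected_level = float(np.dot(np.arange(4), scores))
print("fuzzy scores:", np.round(scores, 3))
print(f"continuous rockburst level ~ {expected_level:.2f}")
```

A case scored 1.9, say, reads as "moderate, trending from slight", which is the continuity of rockburst characteristics a hard argmax over levels would discard.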
Landslides pose a formidable natural hazard across the Qinghai-Tibet Plateau (QTP), endangering both ecosystems and human life. Identifying the driving factors behind landslides and accurately assessing susceptibility are key to mitigating disaster risk. This study integrated multi-source historical landslide data with 15 predictive factors and used several machine learning models—Random Forest (RF), Gradient Boosting Regression Trees (GBRT), Extreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost)—to generate susceptibility maps. The Shapley additive explanations (SHAP) method was applied to quantify factor importance and explore nonlinear effects. The results showed that: (1) CatBoost was the best-performing model (CA = 0.938, AUC = 0.980) for assessing landslide susceptibility, with altitude emerging as the most significant factor, followed by distance to roads and earthquake sites, precipitation, and slope; (2) the SHAP method revealed critical nonlinear thresholds, demonstrating that historical landslides were concentrated at mid-altitudes (1400-4000 m) and decreased markedly above 4000 m, with a parallel reduction in probability beyond 700 m from roads; and (3) landslide-prone areas, comprising 13% of the QTP, were concentrated in the southeastern and northeastern parts of the plateau. By integrating machine learning and SHAP analysis, this study revealed landslide hazard-prone areas and their driving factors, providing insights to support disaster management strategies and sustainable regional planning.
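The attribution step can be sketched with permutation importance as a lightweight stand-in for SHAP (both attribute predictions to features; SHAP additionally gives per-sample values). The landslide inventory below is synthetic, built to echo the reported mid-altitude and near-road thresholds; gradient boosting stands in for CatBoost.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
names = ["altitude", "dist_roads", "noise_a", "noise_b", "noise_c"]

# Synthetic inventory with the nonlinear effects the SHAP analysis
# reports: mid-altitudes (1400-4000 m) and <700 m from roads are prone;
# the last three columns are pure-noise placeholder factors.
alt = rng.uniform(500, 5500, n)
dist = rng.uniform(0, 2000, n)
other = rng.random((n, 3))
risk = (((alt > 1400) & (alt < 4000)) * 1.5
        + (dist < 700) * 1.0 + other[:, 0] * 0.2)
y = (risk + rng.normal(0, 0.3, n) > 1.4).astype(int)
X = np.column_stack([alt, dist, other])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)

# Permutation importance: accuracy drop when each column is shuffled.
imp = permutation_importance(model, Xte, yte, n_repeats=10, random_state=0)
for name, m in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>10}: {m:.3f}")
```

On real data, swapping `permutation_importance` for a TreeExplainer-style SHAP analysis additionally exposes the threshold shapes (e.g., the drop above 4000 m) rather than only a global ranking.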
Accurate prediction of concrete compressive strength is fundamental for optimizing mix designs, improving material utilization, and ensuring structural safety in modern construction. Traditional empirical methods often fail to capture the non-linear relationships among concrete constituents, especially with the growing use of supplementary cementitious materials and recycled aggregates. This study presents an integrated machine learning framework for concrete strength prediction that combines advanced regression models, namely CatBoost, with metaheuristic optimization algorithms, with a particular focus on the Somersaulting Spider Optimizer (SSO). A comprehensive dataset encompassing diverse mix proportions and material types was used to evaluate baseline machine learning models, including CatBoost, XGBoost, ExtraTrees, and RandomForest. Among these, CatBoost demonstrated superior accuracy across multiple performance metrics. To further enhance predictive capability, several bio-inspired optimizers were employed for hyperparameter tuning. The SSO-CatBoost hybrid achieved the lowest mean squared error and the highest correlation coefficients, outperforming other metaheuristic approaches such as the Genetic Algorithm, Particle Swarm Optimization, and the Grey Wolf Optimizer. Statistical significance was established through analysis of variance and Wilcoxon signed-rank testing, confirming the robustness of the optimized models. The proposed methodology not only delivers improved predictive performance but also offers a transparent framework for mix design optimization, supporting data-driven decision making in sustainable and resilient infrastructure development.
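Metaheuristic hyperparameter tuning of a boosted regressor can be sketched as a tiny population-based search. The update rule below (candidates jump toward the current best with random perturbations) is a generic stand-in, since the abstract does not specify the SSO's actual equations; gradient boosting on synthetic data stands in for CatBoost on the mix-design dataset.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)
rng = np.random.default_rng(0)

def fitness(params):
    """Cross-validated negative MSE for a (learning_rate, max_depth) pair."""
    lr, depth = params
    model = GradientBoostingRegressor(learning_rate=lr, max_depth=int(depth),
                                      n_estimators=50, random_state=0)
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

# Population-based search: evaluate candidates, then respawn the
# population around the best one with a random "jump" each generation.
pop = np.column_stack([rng.uniform(0.01, 0.3, 6), rng.uniform(2, 6, 6)])
for _ in range(3):
    scores = np.array([fitness(p) for p in pop])
    best = pop[scores.argmax()]
    pop = best + rng.normal(0, [0.03, 0.5], size=(6, 2))
    pop[:, 0] = np.clip(pop[:, 0], 0.01, 0.3)
    pop[:, 1] = np.clip(pop[:, 1], 2, 6)

scores = np.array([fitness(p) for p in pop])
best_lr, best_depth = pop[scores.argmax()]
print(f"best learning_rate={best_lr:.3f}, max_depth={int(best_depth)}")
```

Swapping in a different update rule (GA crossover, PSO velocities, grey-wolf encircling) changes only the respawn line, which is why such optimizers are easy to compare head-to-head on the same fitness function.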
Objective: The increasing global prevalence of mental health disorders highlights the urgent need for innovative diagnostic methods. Conditions such as anxiety, depression, stress, bipolar disorder (BD), and autism spectrum disorder (ASD) frequently arise from a complex interplay of demographic, biological, and socioeconomic factors, resulting in aggravated symptoms. This review investigates machine intelligence approaches for the early detection and prediction of mental health conditions. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework was employed to conduct a systematic review and analysis covering the period 2018 to 2025. The potential impact of machine intelligence methods was assessed by considering various strategies, hybridizations of algorithms, tools, techniques, and datasets, and their applicability. Results: Through a systematic review of studies concentrating on the prediction and evaluation of mental disorders using machine intelligence algorithms, advancements, limitations, and gaps in current methodologies were highlighted. The datasets and tools utilized in these investigations were examined, offering a detailed overview of the status of computational models in understanding and diagnosing mental health disorders. Recent research indicates considerable improvements in diagnostic accuracy and treatment effectiveness, particularly for depression and anxiety, which have shown the greatest methodological diversity and the most notable advancements in machine intelligence. Conclusions: Despite these improvements, challenges persist, including the need for more diverse datasets, ethical issues surrounding data privacy and algorithmic bias, and obstacles to integrating these technologies into clinical settings. This synthesis emphasizes the transformative potential of machine intelligence in enhancing mental healthcare.
The complex interactions and conflicting performance demands in multi-component composites pose significant challenges for achieving balanced multi-property optimization through conventional trial-and-error approaches. Machine learning (ML) offers a promising solution, markedly improving materials discovery efficiency. However, the high dimensionality of feature spaces in such systems has long impeded effective ML-driven feature representation and inverse design. To overcome this, we present an Intelligent Screening System (ISS) framework to accelerate the discovery of optimal formulations balancing four key properties in 15-component PTFE-based copper-clad laminate composites (PTFE-CCLCs). The ISS adopts modular descriptors based on the physical information of component volume fractions, thereby simplifying feature representation. By leveraging the inverse prediction capability of ML models and constructing a performance-driven virtual candidate database, the ISS significantly reduces the computational complexity associated with high-dimensional spaces. Experimental validation confirmed that ISS-optimized formulations exhibited superior synergy, notably resolving the trade-off between thermal conductivity and peel strength, and outperformed many commercial counterparts. Despite limited data and inherent process variability, the ISS achieved an average prediction accuracy of 76.5%, with thermal conductivity predictions exceeding 90%, demonstrating robust reliability. This work provides an innovative, efficient strategy for multifunctional optimization and accelerated discovery in ultra-complex composite systems, highlighting the integration of ML and advanced materials design.
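The virtual-candidate-screening idea can be sketched end to end: train property models on measured formulations, score a large random pool of candidate formulations, and keep the best balanced one. Everything below is a toy assumption—4 components instead of 15, two invented property functions, and an equal-weight score in place of the ISS's actual four-property criterion.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def sample_fracs(n):
    """Random volume-fraction vectors normalized to sum to 1."""
    v = rng.random((n, 4))
    return v / v.sum(axis=1, keepdims=True)

# Toy training set: two conflicting properties. "Thermal conductivity"
# likes the filler (component 2); "peel strength" dislikes it.
X = sample_fracs(200)
k_th = 2.0 * X[:, 2] + 0.5 * X[:, 1] + rng.normal(0, 0.02, 200)
peel = 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 0.02, 200)

m_k = RandomForestRegressor(random_state=0).fit(X, k_th)
m_p = RandomForestRegressor(random_state=0).fit(X, peel)

# Virtual candidate database: score a large formulation pool and keep
# the candidate with the best balanced (equal-weight) score.
cand = sample_fracs(5000)
score = 0.5 * m_k.predict(cand) + 0.5 * m_p.predict(cand)
best = cand[score.argmax()]
print("balanced formulation:", np.round(best, 3))
```

Because the candidates are generated rather than enumerated, the pool size—not the 15-dimensional composition grid—sets the screening cost, which is the dimensionality saving the ISS exploits.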
Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes recent applications of ML techniques to post-kidney transplant rejection, covering the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in kidney transplantation. Furthermore, this article discusses the challenges faced by existing research and potential future directions, providing a theoretical basis and technical references for related studies.
Delayed wound healing following radical gastrectomy remains an important yet underappreciated complication that prolongs hospitalization, increases costs, and undermines patient recovery. In An et al's recent study, the authors present a machine learning-based risk prediction approach using routinely available clinical and laboratory parameters. Among the evaluated algorithms, a decision tree model demonstrated excellent discrimination, achieving an area under the curve of 0.951 in the validation set and notably identifying all true cases of delayed wound healing at the Youden index threshold. The inclusion of variables such as drainage duration and preoperative white blood cell and neutrophil counts, alongside age and sex, highlights the pragmatic appeal of the model for early postoperative monitoring. Nevertheless, several aspects warrant critical reflection, including the reliance on a postoperative variable (drainage duration), internal validation only, and certain reporting inconsistencies. This letter underscores both the promise and the limitations of adopting interpretable machine learning models in perioperative care. We advocate for transparent reporting, external validation, and careful consideration of clinically actionable timepoints before integration into practice. Ultimately, this work represents a valuable step toward precision risk stratification in gastric cancer surgery and sets the stage for multicenter, prospective evaluations.
Gastrointestinal (GI) cancers remain a leading cause of cancer-related morbidity and mortality worldwide. Artificial intelligence (AI), particularly machine learning and deep learning (DL), has shown promise in enhancing cancer detection, diagnosis, and prognostication. A narrative review of literature published from January 2015 to March 2025 was conducted using PubMed, Web of Science, and Scopus. Search terms included "gastrointestinal cancer", "artificial intelligence", "machine learning", "deep learning", "radiomics", "multimodal detection", and "predictive modeling". Studies were included if they focused on clinically relevant AI applications in GI oncology. AI algorithms for GI cancer detection have achieved high performance across imaging modalities, with endoscopic DL systems reporting accuracies of 85%-97% for polyp detection and segmentation. Radiomics-based models have predicted molecular biomarkers such as programmed cell death ligand 2 expression with areas under the curve of up to 0.92. Large language models applied to radiology reports demonstrated diagnostic accuracy comparable to that of junior radiologists (78.9% vs 80.0%), though without incremental value when combined with human interpretation. Multimodal AI approaches integrating imaging, pathology, and clinical data show emerging potential for precision oncology. AI in GI oncology has reached clinically relevant accuracy levels in multiple diagnostic tasks, with multimodal approaches and predictive biomarker modeling offering new opportunities for personalized care. However, broader validation, integration into clinical workflows, and attention to ethical, legal, and social implications remain critical for widespread adoption.
The detection and characterization of non-metallic inclusions are essential for clean steel production. Recently, imaging analysis combined with high-dimensional data processing of metallic materials using artificial intelligence (AI)-based machine learning (ML) has developed rapidly. This technique has achieved impressive results in the field of inclusion classification in process metallurgy. The present study surveys the ML modeling of inclusion prediction in advanced steels, including the detection, classification, and feature prediction of inclusions in different steel grades. Studies on clean steel with different features based on data and image analysis via ML are summarized. Regarding the data analysis, the inclusion prediction methodology based on ML establishes a connection between the experimental parameters and inclusion characteristics and analyzes the importance of the experimental parameters. Regarding the image analysis, the focus is placed on the classification of different types of inclusions via deep learning, in comparison with data analysis. Finally, further development of inclusion analyses using ML-based methods is recommended. This work paves the way for the application of AI-based methodologies for ultraclean-steel studies from a sustainable metallurgy perspective.
Motor imbalance is a critical failure mode in rotating machinery, potentially causing severe equipment damage if undetected. Traditional vibration-based diagnostic methods rely on direct sensor contact, leading to installation challenges and measurement artifacts that can compromise accuracy. This study presents a novel radar-based framework for non-contact motor imbalance detection using 24 GHz continuous-wave radar. A dataset of 1802 experimental trials was sourced, covering four imbalance levels (0, 10, 20, 30 g) across varying motor speeds (500–1500 rpm) and load torques (0–3 Nm). Dual-channel in-phase and quadrature radar signals were captured at 10,000 samples per second for 30-s intervals, preserving both amplitude and phase information for analysis. A multi-domain feature extraction methodology captured imbalance signatures in the time, frequency, and complex signal domains. From 65 initial features, statistical analysis using Kruskal–Wallis tests identified significant descriptors, and recursive feature elimination with Random Forest reduced the feature set to 20 dimensions, achieving 69% dimensionality reduction without loss of performance. Six machine learning algorithms (Random Forest, Extra Trees Classifier, Extreme Gradient Boosting, Categorical Boosting, Support Vector Machine with radial basis function kernel, and k-Nearest Neighbors) were evaluated with grid-search hyperparameter optimization and five-fold cross-validation. The Extra Trees Classifier achieved the best performance with 98.52% test accuracy, 98% cross-validation accuracy, and minimal variance, maintaining per-class precision and recall above 97%. Its superior performance is attributed to its randomized split selection and full bootstrapping strategy, which reduce variance and overfitting while effectively capturing the nonlinear feature interactions and non-normal distributions present in the dataset. The model’s average inference time of 70 ms enables near real-time deployment. Comparative analysis demonstrates that the radar-based framework matches or exceeds traditional contact-based methods while eliminating their inherent limitations, providing a robust, scalable, and noninvasive solution for industrial motor condition monitoring, particularly in hazardous or space-constrained environments.
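The two-stage feature screening described above (Kruskal–Wallis significance filtering followed by recursive feature elimination with a Random Forest ranker) can be sketched as follows. The synthetic data, feature counts, and significance threshold are illustrative assumptions, not the paper's actual radar dataset or settings.

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)

# Synthetic stand-in: 4 imbalance classes, 6 features, features 0 and 1 informative.
n_per_class, n_features, n_classes = 60, 6, 4
X = rng.normal(size=(n_classes * n_per_class, n_features))
y = np.repeat(np.arange(n_classes), n_per_class)
X[:, 0] += y * 1.5   # strong class-dependent shift
X[:, 1] += y * 0.8   # weaker class-dependent shift

# Stage 1: Kruskal-Wallis test per feature; keep significant descriptors.
p_values = np.array([
    kruskal(*[X[y == c, j] for c in range(n_classes)]).pvalue
    for j in range(n_features)
])
significant = np.where(p_values < 0.05)[0]

# Stage 2: recursive feature elimination with a Random Forest ranker.
rfe = RFE(RandomForestClassifier(n_estimators=50, random_state=0),
          n_features_to_select=2).fit(X[:, significant], y)
selected = significant[rfe.support_]
print(selected)
```

The same pattern scales to the 65-feature case in the paper by changing `n_features_to_select` to the target dimensionality.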
基金Guangzhou Metro Scientific Research Project(No.JT204-100111-23001)Chongqing Municipal Special Project for Technological Innovation and Application Development(No.CSTB2022TIAD-KPX0101)Science and Technology Research and Development Program of China State Railway Group Co.,Ltd.(No.N2023G045)。
文摘The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability. However, research on the uplift resistance of special-shaped shield tunnels is limited. This study combines numerical simulation with machine learning techniques to explore this issue. It summarizes special-shaped tunnel geometries and introduces a shape coefficient. Using the finite element software Plaxis3D, the study simulates the effects of six key parameters (shape coefficient, burial depth ratio, tunnel’s longest horizontal length, internal friction angle, cohesion, and soil submerged bulk density) on uplift resistance across different conditions. Employing XGBoost and ANN methods, the feature importance of each parameter was analyzed based on the numerical simulation results. The findings demonstrate that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil, whereas the other parameters have the opposite effect. Furthermore, the feature importance for uplift resistance decreases in the order of burial depth ratio, internal friction angle, tunnel longest horizontal length, cohesion, soil submerged bulk density, and shape coefficient.
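The feature-importance step above can be sketched with permutation importance, using scikit-learn's GradientBoostingRegressor as a stand-in for the paper's XGBoost/ANN analysis. The synthetic relationship between the six parameters and uplift resistance is invented purely for illustration and does not reproduce the Plaxis3D results.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
names = ["burial_depth_ratio", "friction_angle", "horizontal_length",
         "cohesion", "submerged_density", "shape_coefficient"]

# Synthetic stand-in for simulation outputs: burial depth ratio dominates,
# shape coefficient acts weakly and negatively (circle-like -> less resistance).
X = rng.uniform(size=(500, 6))
y = (3.0 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2]
     + 0.6 * X[:, 3] + 0.4 * X[:, 4] - 0.2 * X[:, 5]
     + rng.normal(scale=0.05, size=500))

model = GradientBoostingRegressor(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = [names[i] for i in np.argsort(imp.importances_mean)[::-1]]
print(ranking)
```

Permutation importance is model-agnostic, so the same loop works unchanged if the fitted model is swapped for an XGBoost regressor or an ANN.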
基金supported by the National Natural Science Foundation of China (42505149, 41925023, U2342223, 42105069, and 91744208), the China Postdoctoral Science Foundation (2025M770303), the Fundamental Research Funds for the Central Universities (14380230), the Jiangsu Funding Program for Excellent Postdoctoral Talent, and the Jiangsu Collaborative Innovation Center of Climate Change.
文摘Countries around the world have been making efforts to reduce pollutant emissions. However, the response of global black carbon (BC) aging to emission changes remains unclear. Using the Community Atmosphere Model version 6 with a machine-learning-integrated four-mode version of the Modal Aerosol Module, we quantify global BC aging responses to emission reductions for 2011–2018 and for 2050 and 2100 under carbon neutrality. During 2011–2018, global trends in BC aging degree (mass ratio of coatings to BC, R_(BC)) exhibited marked regional disparities, with a significant increase in China (5.4% yr^(-1)), which contrasts with minimal changes in the USA, Europe, and India. The divergence is attributed to opposing trends in secondary organic aerosol (SOA) and sulfate coatings, driven by regional changes in the emission ratios of the corresponding coating precursors to BC (volatile organic compounds, VOCs/BC, and SO_(2)/BC). Projections under carbon neutrality reveal that R_(BC) will increase globally by 47% (118%) in 2050 (2100), with strong convergent increases expected across major source regions. The R_(BC) increase, primarily driven by enhanced SOA coatings due to sharper BC reductions relative to VOCs, will enhance the global BC mass absorption cross-section (MAC) by 11% (17%) in 2050 (2100). Consequently, although the global BC burden will decline sharply by 60% (76%), the enhanced MAC partially offsets the magnitude of the decline in the BC direct radiative effect (DRE), moderating global BC DRE decreases to 88% (92%) of the BC burden reductions in 2050 (2100). This study highlights the globally enhanced BC aging and light absorption capacity under carbon neutrality, which partly offsets the impact of BC emission reductions on future changes in BC radiative effects globally.
基金supported by the China Agriculture Research System of MOF and MARAthe National Natural Science Foundation of China (31872337 and 31501919)the Agricultural Science and Technology Innovation Project,China (ASTIP-IAS02)。
文摘The advantages of genome selection (GS) in animal and plant breeding are self-evident. Traditional parametric models struggle to fit increasingly large sequencing data and to capture complex effects accurately. Machine learning models have demonstrated remarkable potential in addressing these challenges. In this study, we introduced the concept of mixed kernel functions to explore the performance of support vector machine regression (SVR) in GS. Six single kernel functions (SVR_L, SVR_C, SVR_G, SVR_P, SVR_S, SVR_L) and four mixed kernel functions (SVR_GS, SVR_GP, SVR_LS, SVR_LP) were used to predict genomic breeding values. Prediction accuracy, mean squared error (MSE), and mean absolute error (MAE) were used as evaluation indicators for comparison with two traditional parametric models (GBLUP, BayesB) and two popular machine learning models (RF, KcRR). The results indicate that in most cases the mixed kernel function models significantly outperform GBLUP, BayesB, and the single kernel functions. For instance, for T1 in the pig dataset, the predictive accuracy of SVR_GS is improved by 10% compared to GBLUP, and by approximately 4.4% and 18.6% compared to SVR_G and SVR_S, respectively. For E1 in the wheat dataset, SVR_GS achieves 13.3% higher prediction accuracy than GBLUP. Among the single kernel functions, the Laplacian and Gaussian kernels yield similar results, with the Gaussian kernel performing better. The mixed kernel functions notably reduce the MSE and MAE compared to all single kernel functions. Furthermore, regarding runtime, the SVR_GS and SVR_GP mixed kernel functions run approximately three times faster than GBLUP on the pig dataset, with only a slight increase in runtime compared to the single kernel function models. In summary, the mixed kernel function SVR models are competitive in both speed and accuracy, and models such as SVR_GS have important application potential for GS.
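The mixed-kernel idea, a weighted combination of two base kernels passed to SVR, can be illustrated with scikit-learn, which accepts a callable kernel returning the Gram matrix. The Gaussian-plus-polynomial mix, its weights, and the toy regression target below are illustrative choices, not the paper's tuned configurations.

```python
import numpy as np
from sklearn.svm import SVR

def mixed_kernel(X, Y, gamma=1.0, coef0=1.0, degree=2, w=0.7):
    """Weighted sum of a Gaussian (RBF) kernel and a polynomial kernel."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared distances
    gaussian = np.exp(-gamma * sq)
    poly = (X @ Y.T + coef0) ** degree
    return w * gaussian + (1 - w) * poly

rng = np.random.default_rng(1)
X = rng.uniform(0, 3, size=(80, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 0]   # toy nonlinear "breeding value" signal

# scikit-learn calls the callable with two sample matrices and expects the Gram matrix.
svr = SVR(kernel=mixed_kernel, C=10.0, epsilon=0.01).fit(X, y)
r2 = svr.score(X, y)
print(f"training R^2 = {r2:.3f}")
```

Varying `w` between 0 and 1 sweeps from a pure polynomial kernel to a pure Gaussian one, which is the design space the mixed-kernel comparison explores.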
基金supported by the National Natural Science Foundation of China under Grant U2442219Fengyun Satellite Application Pioneer Program(2023)Special Initiative on Numerical Weather Prediction(NWP)Applications,the Civil Aerospace Technology Pre-Research Project(D040405)the Joint Funds of the Zhejiang Provincial Natural Science Foundation of China under Grant No.LZJMZ23D050003。
文摘Accurate retrieval of atmospheric vertical profiles is critical for improving weather prediction and climate monitoring. However, the complexity of atmospheric processes in cloudy regions poses challenges compared with clear-sky scenarios. This study presents a novel framework that integrates Bayesian optimization and machine learning approaches to retrieve atmospheric vertical profiles, including temperature, humidity, ozone concentration, cloud fraction, ice water content (IWC), and liquid water content (LWC), from hyperspectral infrared observations. Specifically, a Bayesian method was used to refine ERA5 reanalysis data by minimizing brightness temperature (BT) discrepancies against FY-4B Geostationary Interferometric Infrared Sounder (GIIRS) observations, generating a high-quality profile database (~2.8 million profiles) across diverse weather systems. The optimized profiles improve radiative consistency, reducing BT biases from >40 K to <10 K in cloudy regions. To further overcome the limitations of the Bayesian method, we developed a Transformer-ResNet hybrid model (TERNet), which achieved superior performance with RMSE values of 1.61 K (temperature), 5.77% (humidity), and 2.25×10^(–6)/6.09×10^(–6) kg kg^(–1) (IWC/LWC) across all vertical levels in all-sky conditions. TERNet outperforms both ERA5 in cloud parameter retrieval and the GIIRS L2 product in thermodynamic profiling. Independent verification with radiosonde and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) datasets confirms the framework's reliability across various meteorological regimes. This work demonstrates the capability of combining physics-informed Bayesian methods with data-driven machine learning to fully exploit hyperspectral IR data.
基金financially supported by the National Natural Science Foundation of China (No. 12374405), the Provincial Science Foundation for Distinguished Young Scholars of Fujian (No. 2024J010024), the Natural Science Foundation of Fujian Province of China (No. 2023J011267), and the Major Research Projects for Young and Middle-aged Researchers of Fujian Provincial Health Commission (No. 2021ZQNZD010). Facilities were provided by the Condensed Matter National Laboratory at the Institute for Research in Fundamental Sciences (IPM) in Tehran, Iran.
文摘Nasopharyngeal carcinoma (NPC) is a malignant tumor prevalent in southern China and Southeast Asia, where its early detection is crucial for improving patient prognosis and reducing mortality rates. However, existing screening methods suffer from limitations in accuracy and accessibility, hindering their application in large-scale population screening. In this work, a surface-enhanced Raman spectroscopy (SERS)-based method was established to explore the profiles of different stratified components in saliva from NPC patients and healthy subjects after fractionation processing. The findings indicate that all fractionated samples exhibit disease-associated molecular signaling differences, with the small-molecule fraction (molecular weight cut-off value of 10 kDa) demonstrating superior classification capability: sensitivity of 90.5%, specificity of 75.6%, and area under the receiver operating characteristic (ROC) curve of 0.925±0.031. The primary objective of this study was to qualitatively explore patterns in saliva composition across groups. The proposed SERS detection strategy for fractionated saliva offers novel insights for enhancing the sensitivity and reliability of noninvasive NPC screening, laying the foundation for translational application in large-scale clinical settings.
基金extend their gratitude to the Deanship of Scientific Research,Vice Presidency for Graduate Studies and Scientific Research,King Faisal University,Saudi Arabia,for funding the publication of this work under the Ambitious Researcher program(Project No.KFU253806).
文摘Artificial intelligence (AI)-based models have been used to predict the structural, optical, mechanical, and electrochemical properties of zinc oxide/graphene oxide nanocomposites. Machine learning (ML) models such as Artificial Neural Networks (ANN), Support Vector Regression (SVR), Multilayer Perceptron (MLP), and hybrids, along with fuzzy logic tools, were applied to predict properties such as the wavelength at maximum intensity (444 nm), crystallite size (17.50 nm), and optical bandgap (2.85 eV). Other properties, such as energy density, power density, and charge transfer resistance, were also predicted using datasets of 1000 samples (80:20). In general, the energy parameters were predicted more accurately by the hybrid models. The hydrothermal method was used to synthesize graphene oxide (GO) and zinc oxide (ZnO) nanocomposites. The increased surface area, conductivity, and stability imparted by graphene oxide in zinc oxide nanoparticles make the composite an ideal option for energy storage. X-ray diffraction (XRD) confirmed a crystallite size of 17.41 nm for the nanocomposite and the presence of GO peaks (12.8°). Scanning electron microscopy (SEM) showed wrinkled GO sheets anchored on zinc oxide with an average particle size of 2.93 μm. Energy-dispersive X-ray spectroscopy (EDX) confirmed the elemental composition, and Fourier-transform infrared spectroscopy (FTIR) revealed the impact of GO on functional groups and electrochemical behavior. The photoluminescence (PL) wavelength of 439 nm and band gap of 2.81 eV show that the material is suitable for energy applications in nanocomposites. Smart nanocomposite materials with improved performance in energy storage and related applications were fabricated in this work by combining synthesis, characterization, fuzzy logic, and machine learning.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R104)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Modern intrusion detection systems face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class intrusion detection using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting the attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
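Of the pipeline components above, SMOTE is the easiest to make concrete. The NumPy sketch below implements its core interpolation rule (synthetic minority samples drawn on segments between a minority point and one of its k nearest minority neighbors); it is a didactic reimplementation, not the imbalanced-learn library the authors may have used.

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic samples by interpolating between a random
    minority sample and one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    # Pairwise distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    neighbors = np.argsort(d, axis=1)[:, :k]    # k nearest per sample
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))            # random minority sample
        nb = neighbors[j, rng.integers(k)]      # one of its k neighbors
        u = rng.random()                        # interpolation fraction in [0, 1)
        out[i] = X_min[j] + u * (X_min[nb] - X_min[j])
    return out

rng = np.random.default_rng(2)
X_minority = rng.normal(size=(20, 4))           # toy "rare attack class" samples
X_synth = smote(X_minority, n_new=80)
print(X_synth.shape)
```

Because each synthetic point lies on a segment between two real minority points, SMOTE densifies the minority region rather than duplicating samples, which is why it pairs well with the stacked classifier described above.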
基金supported by the special fund of the National Clinical Key Specialty Construction Program[(2022)301-2305].
文摘BACKGROUND: This study aims to develop and validate a machine learning-based in-hospital mortality prediction model for acute aortic syndrome (AAS) in the emergency department (ED) and to derive a simplified version suitable for rapid clinical application. METHODS: In this multi-center retrospective cohort study, AAS patient data from three hospitals were analyzed. The modeling cohort included data from the First Affiliated Hospital of Zhengzhou University and the People's Hospital of Xinjiang Uygur Autonomous Region, with Peking University Third Hospital data serving as the external test set. Four machine learning algorithms (logistic regression [LR], multilayer perceptron [MLP], Gaussian naive Bayes [GNB], and random forest [RF]) were used to develop predictive models based on 34 early-accessible clinical variables. A simplified model was then derived from five key variables (Stanford type, pericardial effusion, asymmetric peripheral arterial pulsation, decreased bowel sounds, and dyspnea) selected via Least Absolute Shrinkage and Selection Operator (LASSO) regression to improve ED applicability. RESULTS: A total of 929 patients were included in the modeling cohort, and 210 were included in the external test set. The four machine learning models based on 34 clinical variables achieved internal and external validation AUCs of 0.85-0.90 and 0.73-0.85, respectively. The simplified model incorporating five key variables demonstrated internal and external validation AUCs of 0.71-0.86 and 0.75-0.78, respectively. Both models showed robust calibration and predictive stability across datasets. CONCLUSION: Both the full and simplified machine learning models demonstrated solid predictive performance and generalizability.
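The variable-reduction step above, LASSO shrinking a larger predictor set down to a handful of key variables, can be sketched with L1-penalized logistic regression in scikit-learn. The synthetic mortality data, the penalty strength, and the feature labels are assumptions for illustration; they do not reproduce the study's cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, p = 600, 12                                 # 12 candidate variables, 2 truly predictive
X = rng.normal(size=(n, p))
# Hypothetical: only the first two variables drive in-hospital mortality.
logit = 1.8 * X[:, 0] - 1.5 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# The L1 penalty drives uninformative coefficients exactly to zero,
# which is what yields the short list of key variables.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.2).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])
print(selected)
```

Smaller `C` (stronger penalty) prunes more variables; in practice the penalty is usually chosen by cross-validation so the surviving set is both small and predictive.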
文摘This study investigates the uncertain dynamic characterization of hybrid composite plates by employing advanced machine-assisted finite element methodologies. Hybrid composites, widely used in aerospace, automotive, and structural applications, often face variability in material properties, geometric configurations, and manufacturing processes, leading to uncertainty in their dynamic response. To address this, three surrogate-based machine learning approaches, radial basis function (RBF), multivariate adaptive regression splines (MARS), and polynomial neural networks (PNN), are integrated with a finite element framework to efficiently capture the stochastic behavior of these plates. The research focuses on predicting the first three natural frequencies under material uncertainties, which are critical to ensuring structural reliability. Monte Carlo simulation (MCS) is used as a benchmark for generating probabilistic datasets, including mean values, standard deviations, and probability density functions. The surrogate models are then trained and validated against these datasets, enabling accurate representation of uncertainty with substantially fewer samples compared to conventional MCS. Among the methods studied, the RBF model demonstrates superior performance, closely approximating MCS results with a reduced sample size, thereby achieving significant computational savings. The proposed framework not only reduces computational time and costs but also maintains high predictive accuracy, making it well-suited for complex engineering systems. Beyond free vibration analysis, the methodology can be extended to more sophisticated scenarios, such as forced vibration, damping effects, and nonlinear structural responses. Overall, this work presents a computationally efficient and robust approach for surrogate-based uncertainty quantification, advancing the analysis and design of hybrid composite structures under uncertainty.
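The surrogate idea above (replace the expensive finite element solve with an RBF model trained on a few samples, then run Monte Carlo on the surrogate) is sketched below with SciPy's RBFInterpolator. The closed-form "frequency" function stands in for the FE solver and is purely illustrative, as are the parameter ranges.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fe_solver(theta):
    """Stand-in for an expensive FE natural-frequency computation."""
    E, rho = theta[:, 0], theta[:, 1]   # toy stiffness and density parameters
    return np.sqrt(E / rho)             # frequency-like response

rng = np.random.default_rng(4)

# Train the RBF surrogate on only 80 "expensive" solver evaluations.
theta_train = rng.uniform([1.0, 1.0], [2.0, 2.0], size=(80, 2))
surrogate = RBFInterpolator(theta_train, fe_solver(theta_train))

# Monte Carlo on the cheap surrogate: 10,000 draws instead of 10,000 FE runs.
theta_mc = rng.uniform([1.0, 1.0], [2.0, 2.0], size=(10_000, 2))
mc_mean = surrogate(theta_mc).mean()
true_mean = fe_solver(theta_mc).mean()   # available here only because the toy solver is cheap
print(mc_mean, true_mean)
```

The computational saving is exactly the ratio of Monte Carlo draws to surrogate training samples, which is the trade-off the abstract highlights for the RBF model.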
基金supported by Iran National Science Foundation(INSF)under project No.4022382Facilities were provided by the Condensed Matter National Laboratory at the Institute for Research in Fundamental Sciences(IPM)in Tehran,Iran.Additionally,financial support for equipment purchase was granted by the INSF under project number 4022382.
文摘The rational design of high-performance electrochemical energy storage devices critically depends on a fundamental understanding of ion-electrode interactions at the molecular scale. Herein, we employ interpretable machine learning (ML) to reveal electrolyte hydration energy as a universal descriptor governing ion-specific capacitance in two-dimensional (2D) materials. Through explainable ML, we elucidate how ion hydration shell stability and size critically influence charge transport and storage at the electrode-electrolyte interface. Our analysis identifies hydration energy, not ionic size, as the primary factor dictating capacitance, challenging prevailing assumptions and providing quantifiable design rules for electrolyte selection. These insights offer a data-driven pathway to optimize 2D materials for supercapacitors and beyond, including batteries and electrocatalytic systems. This work demonstrates the power of explainable artificial intelligence in uncovering molecular-level mechanisms that accelerate the discovery and development of next-generation energy storage technologies.
基金Project(42077244)supported by the National Natural Science Foundation of ChinaProject(2020-05)supported by the Open Research Fund of Guangdong Provincial Key Laboratory of Deep Earth Sciences and Geothermal Energy Exploitation and Utilization,China。
文摘Accurate prediction of rockburst intensity levels is crucial for ensuring the safety of deep hard rock engineering construction. This paper introduced an expert system for rockburst intensity level prediction that employs machine learning algorithms as the basis for its inference rules. The system comprises four modules: a database, a repository, an inference engine, and an interpreter. A database containing 1114 rockburst cases was used to construct 357 datasets that serve as the repository for the expert system. Additionally, 19 types of machine learning algorithms were used to establish 6783 micro-models to construct cognitive rules within the inference engine. By integrating probability theory and marginal analysis, a fuzzy scoring method based on the SoftMax function was developed and applied to the interpreter for rockburst intensity level prediction, effectively restoring the continuity of rockburst characteristics. The research results indicate that ensemble algorithms based on decision trees are more effective in capturing the characteristics of rockburst. Key factors for accurate prediction of rockburst intensity include uniaxial compressive strength, elastic energy index, the maximum principal stress, tangential stress, and their composite indicators. The accuracy of the proposed rockburst intensity level prediction expert system was verified using 20 engineering rockburst cases, with predictions aligning closely with the actual rockburst intensity levels.
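The fuzzy scoring idea, turning an ensemble of discrete micro-model votes into a continuous intensity estimate via a SoftMax weighting, can be sketched as follows. The vote counts, temperature, and level scale are illustrative assumptions, not the expert system's calibrated values.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Numerically stable SoftMax with an optional temperature."""
    z = np.asarray(z, dtype=float) / temperature
    e = np.exp(z - z.max())          # subtract max to avoid overflow
    return e / e.sum()

# Hypothetical votes from micro-models for rockburst intensity levels I..IV.
levels = np.array([1, 2, 3, 4])
votes = np.array([3, 10, 25, 7])     # most micro-models predict level III

p = softmax(votes, temperature=5.0)            # temperature smooths the distribution
fuzzy_score = float((levels * p).sum())        # continuous intensity estimate
print(round(fuzzy_score, 2))
```

The probability-weighted score lands between discrete levels (here between II and III-IV), which is the sense in which the method "restores the continuity" of rockburst characteristics.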
基金The National Key Research and Development Program of China,No.2023YFC3206601。
文摘Landslides pose a formidable natural hazard across the Qinghai-Tibet Plateau (QTP), endangering both ecosystems and human life. Identifying the driving factors behind landslides and accurately assessing susceptibility are key to mitigating disaster risk. This study integrated multi-source historical landslide data with 15 predictive factors and used several machine learning models (Random Forest [RF], Gradient Boosting Regression Trees [GBRT], Extreme Gradient Boosting [XGBoost], and Categorical Boosting [CatBoost]) to generate susceptibility maps. The Shapley additive explanation (SHAP) method was applied to quantify factor importance and explore their nonlinear effects. The results showed that: (1) CatBoost was the best-performing model (CA=0.938, AUC=0.980) for assessing landslide susceptibility, with altitude emerging as the most significant factor, followed by distance to roads and earthquake sites, precipitation, and slope; (2) the SHAP method revealed critical nonlinear thresholds, demonstrating that historical landslides were concentrated at mid-altitudes (1400-4000 m) and decreased markedly above 4000 m, with a parallel reduction in probability beyond 700 m from roads; and (3) landslide-prone areas, comprising 13% of the QTP, were concentrated in the southeastern and northeastern parts of the plateau. By integrating machine learning and SHAP analysis, this study revealed landslide hazard-prone areas and their driving factors, providing insights to support disaster management strategies and sustainable regional planning.
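One way to expose the kind of nonlinear threshold the SHAP analysis found (risk concentrated at mid-altitudes) is a partial-dependence curve, used here as a lightweight stand-in for SHAP. The data below are invented so that susceptibility peaks at mid-range altitude; a gradient-boosting classifier substitutes for CatBoost.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(5)
n = 2000
altitude = rng.uniform(500, 5500, size=n)
road_dist = rng.uniform(0, 2000, size=n)
X = np.column_stack([altitude, road_dist])

# Invented rule: landslide risk peaks at mid-altitude and decays away from roads.
risk = np.exp(-((altitude - 2700) / 1200) ** 2) * np.exp(-road_dist / 700)
y = (rng.random(n) < risk).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Partial dependence of predicted probability on altitude, computed directly:
# fix altitude at each grid value, average predictions over the rest of the data.
grid = np.linspace(500, 5500, 30)
curve = []
for a in grid:
    Xa = X.copy()
    Xa[:, 0] = a
    curve.append(clf.predict_proba(Xa)[:, 1].mean())
peak_altitude = grid[int(np.argmax(curve))]
print(f"partial-dependence peak near {peak_altitude:.0f} m")
```

SHAP goes further by attributing each individual prediction to its factors, but the marginal curve above already recovers the mid-altitude peak and the decay above it that the study reports.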
文摘Accurate prediction of concrete compressive strength is fundamental for optimizing mix designs, improving material utilization, and ensuring structural safety in modern construction. Traditional empirical methods often fail to capture the non-linear relationships among concrete constituents, especially with the growing use of supplementary cementitious materials and recycled aggregates. This study presents an integrated machine learning framework for concrete strength prediction, combining advanced regression models, namely CatBoost, with metaheuristic optimization algorithms, with a particular focus on the Somersaulting Spider Optimizer (SSO). A comprehensive dataset encompassing diverse mix proportions and material types was used to evaluate baseline machine learning models, including CatBoost, XGBoost, ExtraTrees, and RandomForest. Among these, CatBoost demonstrated superior accuracy across multiple performance metrics. To further enhance predictive capability, several bio-inspired optimizers were employed for hyperparameter tuning. The SSO-CatBoost hybrid achieved the lowest mean squared error and highest correlation coefficients, outperforming other metaheuristic approaches such as the Genetic Algorithm, Particle Swarm Optimization, and Grey Wolf Optimizer. Statistical significance was established through analysis of variance and Wilcoxon signed-rank testing, confirming the robustness of the optimized models. The proposed methodology not only delivers improved predictive performance but also offers a transparent framework for mix design optimization, supporting data-driven decision making in sustainable and resilient infrastructure development.
文摘Objective: The increasing global prevalence of mental health disorders highlights the urgent need for the development of innovative diagnostic methods. Conditions such as anxiety, depression, stress, bipolar disorder (BD), and autism spectrum disorder (ASD) frequently arise from the complex interplay of demographic, biological, and socioeconomic factors, resulting in aggravated symptoms. This review investigates machine intelligence approaches for the early detection and prediction of mental health conditions. Methods: The preferred reporting items for systematic reviews and meta-analyses (PRISMA) framework was employed to conduct a systematic review and analysis covering the period 2018 to 2025. The potential impact of machine intelligence methods was assessed by considering various strategies, hybridization of algorithms, tools, techniques, and datasets, and their applicability. Results: Through a systematic review of studies concentrating on the prediction and evaluation of mental disorders using machine intelligence algorithms, advancements, limitations, and gaps in current methodologies were highlighted. The datasets and tools utilized in these investigations were examined, offering a detailed overview of the status of computational models in understanding and diagnosing mental health disorders. Recent research indicated considerable improvements in diagnostic accuracy and treatment effectiveness, particularly for depression and anxiety, which have shown the greatest methodological diversity and notable advancements in machine intelligence. Conclusions: Despite these improvements, challenges persist, including the need for more diverse datasets, ethical issues surrounding data privacy and algorithmic bias, and obstacles to integrating these technologies into clinical settings. This synthesis emphasizes the transformative potential of machine intelligence in enhancing mental healthcare.
基金financially supported by the National Key Research and Development Project of China(No.2022YFB3806900)。
文摘The complex interactions and conflicting performance demands in multi-component composites pose significant challenges for achieving balanced multi-property optimization through conventional trial-and-error approaches. Machine learning (ML) offers a promising solution, markedly improving materials discovery efficiency. However, the high dimensionality of feature spaces in such systems has long impeded effective ML-driven feature representation and inverse design. To overcome this, we present an Intelligent Screening System (ISS) framework to accelerate the discovery of optimal formulations balancing four key properties in 15-component PTFE-based copper-clad laminate composites (PTFE-CCLCs). The ISS adopts modular descriptors based on the physical information of component volume fractions, thereby simplifying the feature representation. By leveraging the inverse prediction capability of ML models and constructing a performance-driven virtual candidate database, the ISS significantly reduced the computational complexity associated with high-dimensional spaces. Experimental validation confirmed that ISS-optimized formulations exhibited superior synergy, notably resolving the trade-off between thermal conductivity and peel strength, and outperformed many commercial counterparts. Despite limited data and inherent process variability, the ISS achieved an average prediction accuracy of 76.5%, with thermal conductivity predictions exceeding 90%, demonstrating robust reliability. This work provides an innovative, efficient strategy for multifunctional optimization and accelerated discovery in ultra-complex composite systems, highlighting the integration of ML and advanced materials design.
文摘Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes the recent applications of ML techniques in post-kidney transplant rejection, covering areas such as the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in the field of kidney transplantation. Furthermore, this article discusses the challenges faced in existing research and potential future directions, providing a theoretical basis and technical references for related studies.
Abstract: Delayed wound healing following radical gastrectomy remains an important yet underappreciated complication that prolongs hospitalization, increases costs, and undermines patient recovery. In An et al's recent study, the authors present a machine learning-based risk prediction approach using routinely available clinical and laboratory parameters. Among the evaluated algorithms, a decision tree model demonstrated excellent discrimination, achieving an area under the curve of 0.951 in the validation set and notably identifying all true cases of delayed wound healing at the Youden index threshold. The inclusion of variables such as drainage duration, preoperative white blood cell and neutrophil counts, alongside age and sex, highlights the pragmatic appeal of the model for early postoperative monitoring. Nevertheless, several aspects warrant critical reflection, including the reliance on a postoperative variable (drainage duration), internal validation only, and certain reporting inconsistencies. This letter underscores both the promise and the limitations of adopting interpretable machine learning models in perioperative care. We advocate for transparent reporting, external validation, and careful consideration of clinically actionable timepoints before integration into practice. Ultimately, this work represents a valuable step toward precision risk stratification in gastric cancer surgery, and sets the stage for multicenter, prospective evaluations.
Abstract: Gastrointestinal (GI) cancers remain a leading cause of cancer-related morbidity and mortality worldwide. Artificial intelligence (AI), particularly machine learning and deep learning (DL), has shown promise in enhancing cancer detection, diagnosis, and prognostication. A narrative review of literature published from January 2015 to March 2025 was conducted using PubMed, Web of Science, and Scopus. Search terms included "gastrointestinal cancer", "artificial intelligence", "machine learning", "deep learning", "radiomics", "multimodal detection", and "predictive modeling". Studies were included if they focused on clinically relevant AI applications in GI oncology. AI algorithms for GI cancer detection have achieved high performance across imaging modalities, with endoscopic DL systems reporting accuracies of 85%-97% for polyp detection and segmentation. Radiomics-based models have predicted molecular biomarkers such as programmed cell death ligand 2 expression with areas under the curve of up to 0.92. Large language models applied to radiology reports demonstrated diagnostic accuracy comparable to junior radiologists (78.9% vs 80.0%), though without incremental value when combined with human interpretation. Multimodal AI approaches integrating imaging, pathology, and clinical data show emerging potential for precision oncology. AI in GI oncology has reached clinically relevant accuracy levels in multiple diagnostic tasks, with multimodal approaches and predictive biomarker modeling offering new opportunities for personalized care. However, broader validation, integration into clinical workflows, and attention to ethical, legal, and social implications remain critical for widespread adoption.
Funding: Support from the National Key Research and Development Program of China (No. 2024YFB3713705) is acknowledged. Wangzhong Mu would like to acknowledge Strategic Mobility, Sweden (SSF, No. SM22-0039), the Swedish Foundation for International Cooperation in Research and Higher Education (STINT, No. IB2022-9228), and Jernkontoret (Sweden) for supporting this clean steel research. Gonghao Lian would like to acknowledge the China Scholarship Council (CSC, No. 202306080032).
Abstract: The detection and characterization of non-metallic inclusions are essential for clean steel production. Recently, imaging analysis combined with high-dimensional data processing of metallic materials using artificial intelligence (AI)-based machine learning (ML) has developed rapidly. This technique has achieved impressive results in the field of inclusion classification in process metallurgy. The present study surveys the ML modeling of inclusion prediction in advanced steels, including the detection, classification, and feature prediction of inclusions in different steel grades. Studies on clean steel with different features based on data and image analysis via ML are summarized. Regarding data analysis, the inclusion prediction methodology based on ML establishes a connection between the experimental parameters and inclusion characteristics and analyzes the importance of the experimental parameters. Regarding image analysis, the focus is placed on the classification of different types of inclusions via deep learning, in comparison with data analysis. Finally, further development of inclusion analyses using ML-based methods is recommended. This work paves the way for the application of AI-based methodologies for ultraclean-steel studies from a sustainable metallurgy perspective.
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2026R346), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Motor imbalance is a critical failure mode in rotating machinery, potentially causing severe equipment damage if undetected. Traditional vibration-based diagnostic methods rely on direct sensor contact, leading to installation challenges and measurement artifacts that can compromise accuracy. This study presents a novel radar-based framework for non-contact motor imbalance detection using 24 GHz continuous-wave radar. A dataset of 1802 experimental trials was sourced, covering four imbalance levels (0, 10, 20, 30 g) across varying motor speeds (500–1500 rpm) and load torques (0–3 Nm). Dual-channel in-phase and quadrature radar signals were captured at 10,000 samples per second for 30-second intervals, preserving both amplitude and phase information for analysis. A multi-domain feature extraction methodology captured imbalance signatures in the time, frequency, and complex signal domains. From 65 initial features, statistical analysis using Kruskal–Wallis tests identified significant descriptors, and recursive feature elimination with Random Forest reduced the feature set to 20 dimensions, achieving a 69% dimensionality reduction without loss of performance. Six machine learning algorithms (Random Forest, Extra Trees Classifier, Extreme Gradient Boosting, Categorical Boosting, Support Vector Machine with a radial basis function kernel, and k-Nearest Neighbors) were evaluated with grid-search hyperparameter optimization and five-fold cross-validation. The Extra Trees Classifier achieved the best performance, with 98.52% test accuracy, 98% cross-validation accuracy, and minimal variance, maintaining per-class precision and recall above 97%. Its superior performance is attributed to its randomized split selection and full bootstrapping strategy, which reduce variance and overfitting while effectively capturing the nonlinear feature interactions and non-normal distributions present in the dataset. The model's average inference time of 70 ms enables near real-time deployment. Comparative analysis demonstrates that the radar-based framework matches or exceeds traditional contact-based methods while eliminating their inherent limitations, providing a robust, scalable, and noninvasive solution for industrial motor condition monitoring, particularly in hazardous or space-constrained environments.
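The feature-selection and classification pipeline described in the abstract above (Kruskal–Wallis screening, recursive feature elimination with Random Forest, then an Extra Trees classifier under five-fold cross-validation) can be sketched with scikit-learn. The synthetic data, feature counts, and hyperparameters below are assumptions standing in for the authors' radar dataset, not a reproduction of their implementation.

```python
# Sketch of the three-stage pipeline on synthetic stand-in data:
# 65 candidate features across 4 imbalance classes.
import numpy as np
from scipy.stats import kruskal
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=65, n_informative=25,
                           n_classes=4, random_state=0)

# Step 1: Kruskal-Wallis screening -- keep features whose distributions
# differ significantly across the four imbalance levels.
keep = [j for j in range(X.shape[1])
        if kruskal(*(X[y == c, j] for c in np.unique(y))).pvalue < 0.05]
X_sig = X[:, keep]

# Step 2: recursive feature elimination with a Random Forest ranker,
# reducing the retained set to at most 20 dimensions.
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=min(20, X_sig.shape[1]))
X_20 = rfe.fit_transform(X_sig, y)

# Step 3: Extra Trees with bootstrapping, scored by 5-fold cross-validation.
clf = ExtraTreesClassifier(n_estimators=200, bootstrap=True, random_state=0)
scores = cross_val_score(clf, X_20, y, cv=5)
print(f"{X_20.shape[1]} features, CV accuracy {scores.mean():.3f}")
```

The staged design matters: the cheap univariate test prunes clearly uninformative features before the comparatively expensive model-based elimination runs, which is what makes a 69% dimensionality reduction tractable on a modest dataset.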