Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes the recent applications of ML techniques in post-kidney transplant rejection, covering areas such as the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in the field of kidney transplantation. Furthermore, this article discusses the challenges faced in existing research and potential future directions, providing a theoretical basis and technical references for related studies.
Delayed wound healing following radical gastrectomy remains an important yet underappreciated complication that prolongs hospitalization, increases costs, and undermines patient recovery. In An et al's recent study, the authors present a machine learning-based risk prediction approach using routinely available clinical and laboratory parameters. Among the evaluated algorithms, a decision tree model demonstrated excellent discrimination, achieving an area under the curve of 0.951 in the validation set and notably identifying all true cases of delayed wound healing at the Youden index threshold. The inclusion of variables such as drainage duration, preoperative white blood cell and neutrophil counts, alongside age and sex, highlights the pragmatic appeal of the model for early postoperative monitoring. Nevertheless, several aspects warrant critical reflection, including the reliance on a postoperative variable (drainage duration), internal validation only, and certain reporting inconsistencies. This letter underscores both the promise and the limitations of adopting interpretable machine learning models in perioperative care. We advocate for transparent reporting, external validation, and careful consideration of clinically actionable timepoints before integration into practice. Ultimately, this work represents a valuable step toward precision risk stratification in gastric cancer surgery and sets the stage for multicenter, prospective evaluations.
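Threshold selection at the Youden index, as used in the study above, picks the ROC operating point that maximizes sensitivity + specificity - 1 (i.e., TPR - FPR). A minimal numpy sketch on a toy label/score vector (all data hypothetical, not from the study):

```python
import numpy as np

def youden_threshold(y_true, scores):
    """Return the score threshold maximizing Youden's J = TPR - FPR."""
    thresholds = np.unique(scores)
    pos = (y_true == 1).sum()
    neg = (y_true == 0).sum()
    best_j, best_t = -1.0, thresholds[0]
    for t in thresholds:
        pred = scores >= t
        tpr = (pred & (y_true == 1)).sum() / pos   # sensitivity
        fpr = (pred & (y_true == 0)).sum() / neg   # 1 - specificity
        j = tpr - fpr
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# toy example: scores separate the two classes imperfectly
y = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1])
s = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.7, 0.8, 0.6, 0.9])
t, j = youden_threshold(y, s)
```

At the chosen threshold every positive is flagged (sensitivity 1.0) at the cost of one false positive, which mirrors the "all true cases identified" behavior the letter describes.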
Gastrointestinal (GI) cancers remain a leading cause of cancer-related morbidity and mortality worldwide. Artificial intelligence (AI), particularly machine learning and deep learning (DL), has shown promise in enhancing cancer detection, diagnosis, and prognostication. A narrative review of literature published from January 2015 to March 2025 was conducted using PubMed, Web of Science, and Scopus. Search terms included "gastrointestinal cancer", "artificial intelligence", "machine learning", "deep learning", "radiomics", "multimodal detection", and "predictive modeling". Studies were included if they focused on clinically relevant AI applications in GI oncology. AI algorithms for GI cancer detection have achieved high performance across imaging modalities, with endoscopic DL systems reporting accuracies of 85%-97% for polyp detection and segmentation. Radiomics-based models have predicted molecular biomarkers such as programmed cell death ligand 2 expression with areas under the curve up to 0.92. Large language models applied to radiology reports demonstrated diagnostic accuracy comparable to junior radiologists (78.9% vs 80.0%), though without incremental value when combined with human interpretation. Multimodal AI approaches integrating imaging, pathology, and clinical data show emerging potential for precision oncology. AI in GI oncology has reached clinically relevant accuracy levels in multiple diagnostic tasks, with multimodal approaches and predictive biomarker modeling offering new opportunities for personalized care. However, broader validation, integration into clinical workflows, and attention to ethical, legal, and social implications remain critical for widespread adoption.
Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so it is of great importance to establish accurate processing maps to guide thermomechanical processing and enhance formability. In this study, we developed a particle swarm optimization-based back-propagation artificial neural network model to predict the high-temperature flow behavior of 0.25 wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that of an Arrhenius-type constitutive model and a plain back-propagation artificial neural network model. To train these models, we obtained the raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 °C with strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, microstructure characterization, and the modeling results, which achieved a coefficient of determination higher than 99%.
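The particle swarm optimization step used above to tune the back-propagation network can be illustrated in isolation. Below is a minimal global-best PSO minimizing a sphere function, a hypothetical stand-in for the network's training loss; the swarm size, inertia, and acceleration coefficients are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization with a global-best topology."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()                                    # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()                # global best
    w, c1, c2 = 0.7, 1.5, 1.5                           # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# stand-in objective: a sphere function centered at 1 replaces the ANN loss
best_x, best_val = pso_minimize(lambda z: np.sum((z - 1.0) ** 2), dim=3)
```

In the PSO-BP setting, `f` would instead evaluate the network's training error for a candidate weight vector, letting the swarm escape the poor local minima that plain gradient back-propagation can get stuck in.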
Crystal structure prediction (CSP) is a foundational computational technique for determining the atomic arrangements of crystalline materials, especially under high-pressure conditions. While CSP plays a critical role in materials science, traditional approaches often encounter significant challenges related to computational efficiency and scalability, particularly when applied to complex systems. Recent advances in machine learning (ML) have shown tremendous promise in addressing these limitations, enabling the rapid and accurate prediction of crystal structures across a wide range of chemical compositions and external conditions. This review provides a concise overview of recent progress in ML-assisted CSP methodologies, with a particular focus on machine learning potentials and generative models. By critically analyzing these advances, we highlight the transformative impact of ML in accelerating materials discovery, enhancing computational efficiency, and broadening the applicability of CSP. Additionally, we discuss emerging opportunities and challenges in this rapidly evolving field.
Maintaining a high groundwater level (GWL) is important for preventing fires in peatlands. This study proposes GWL prediction using machine learning methods for forest plantations in Indonesian tropical peatlands. Deep neural networks (DNNs) have been used for such prediction elsewhere; however, they have not been applied to groundwater prediction in Indonesian peatlands. Tropical peatland is characterized by high permeability, and forest plantations are surrounded by several canals. By predicting daily differences in GWL, the GWL itself can be predicted with high accuracy. DNNs, random forests, support vector regression, and XGBoost were compared, all of which yielded similar errors. SHAP values revealed that precipitation falling on the hills rapidly seeps into the soil and flows into the canals, which agrees with the soil's high permeability. These findings can potentially be used to manage and mitigate future fires in peatlands.
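The idea of predicting the daily difference in GWL and then reconstructing the level can be sketched as follows. The data are synthetic, and ordinary least squares on the previous day's change stands in for the DNN/RF/SVR/XGBoost models compared in the study:

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic groundwater level: slow trend + seasonal wiggle + noise (hypothetical data)
t = np.arange(200)
gwl = -0.5 + 0.002 * t + 0.05 * np.sin(t / 10) + rng.normal(0, 0.005, t.size)

diff = np.diff(gwl)                       # daily differences: the prediction target
X = np.column_stack([diff[:-1]])          # feature: previous day's change
y = diff[1:]

# OLS fit with intercept (stand-in for the ML regressors in the study)
design = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred_diff = X @ coef[:1] + coef[1]

# reconstruct tomorrow's level as today's level plus the predicted change
pred_level = gwl[1:-1] + pred_diff
rmse_level = float(np.sqrt(np.mean((pred_level - gwl[2:]) ** 2)))
```

Because each prediction starts from the observed current level, errors do not accumulate, which is why differencing tends to give high level accuracy even with a simple model.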
The rapid growth of machine learning (ML) across fields has intensified the challenge of selecting the right algorithm for specific tasks, known as the Algorithm Selection Problem (ASP). Traditional trial-and-error methods have become impractical due to their resource demands. Automated Machine Learning (AutoML) systems automate this process, but often neglect the group structures and sparsity in meta-features, leading to inefficiencies in algorithm recommendations for classification tasks. This paper proposes a meta-learning approach using Multivariate Sparse Group Lasso (MSGL) to address these limitations. Our method models both within-group and across-group sparsity among meta-features to manage high-dimensional data and reduce multicollinearity across eight meta-feature groups. The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) with adaptive restart efficiently solves the non-smooth optimization problem. Empirical validation on 145 classification datasets with 17 classification algorithms shows that our meta-learning method outperforms four state-of-the-art approaches, achieving 77.18% classification accuracy, 86.07% recommendation accuracy, and 88.83% normalized discounted cumulative gain.
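FISTA with adaptive restart, the solver named above, can be illustrated on a plain lasso problem; the paper's multivariate sparse group lasso adds group-structured penalties on top of the same proximal-gradient machinery. A numpy sketch with synthetic data (problem sizes and the regularization weight are illustrative assumptions):

```python
import numpy as np

def fista_lasso(A, b, lam, iters=500, restart=True):
    """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1, with gradient restart."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()                             # extrapolated point
    t = 1.0                                  # momentum parameter
    for _ in range(iters):
        x_old = x
        grad = A.T @ (A @ z - b)
        w = z - grad / L
        x = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0)   # soft-threshold prox
        if restart and (z - x) @ (x - x_old) > 0:             # adaptive (gradient) restart
            t, z = 1.0, x.copy()
        else:
            t_old, t = t, 0.5 * (1 + np.sqrt(1 + 4 * t ** 2))
            z = x + ((t_old - 1) / t) * (x - x_old)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]                 # 2-sparse ground truth
b = A @ x_true
x_hat = fista_lasso(A, b, lam=0.1)
```

The restart test resets the momentum whenever it starts pointing against the descent direction, which restores monotone-like convergence on ill-conditioned problems.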
NJmat is a user-friendly, data-driven machine learning interface designed for materials design and analysis. The platform integrates advanced computational techniques, including natural language processing (NLP), large language models (LLMs), machine learning potentials (MLPs), and graph neural networks (GNNs), to facilitate materials discovery. The platform has been applied in diverse materials research areas, including perovskite surface design, catalyst discovery, battery materials screening, structural alloy design, and molecular informatics. By automating feature selection, predictive modeling, and result interpretation, NJmat accelerates the development of high-performance materials across energy storage, conversion, and structural applications. Additionally, NJmat serves as an educational tool, allowing students and researchers to apply machine learning techniques in materials science with minimal coding expertise. Through automated feature extraction, genetic algorithms, and interpretable machine learning models, NJmat simplifies the workflow for materials informatics, bridging the gap between AI and experimental materials research. The latest version (available at https://figshare.com/articles/software/NJmatML/24607893 (accessed on 01 January 2025)) enhances its functionality by incorporating NJmatNLP, a module leveraging language models like MatBERT and those based on Word2Vec to support materials prediction tasks. By utilizing clustering and cosine similarity analysis with UMAP visualization, NJmat enables intuitive exploration of materials datasets. While NJmat primarily focuses on structure-property relationships and the discovery of novel chemistries, it can also assist in optimizing processing conditions when relevant parameters are included in the training data. By providing an accessible, integrated environment for machine learning-driven materials discovery, NJmat aligns with the objectives of the Materials Genome Initiative and promotes broader adoption of AI techniques in materials science.
Tian et al present a timely machine learning (ML) model integrating biochemical and novel traditional Chinese medicine (TCM) indicators (tongue edge redness, greasy coating) to predict hepatic steatosis in patients at high metabolic risk. Their prospective cohort design and dual feature selection (LASSO + RFE), culminating in an interpretable XGBoost model (area under the curve: 0.82), represent a significant methodological advance. The inclusion of TCM diagnostics addresses the multisystem heterogeneity of metabolic dysfunction-associated fatty liver disease (MAFLD), a key strength that bridges holistic medicine with precision analytics and underscores potential cost savings over imaging-dependent screening. However, critical limitations impede clinical translation. First, the model's single-center validation (n = 711) lacks external/generalizability testing across diverse populations, risking bias from local demographics. Second, MAFLD subtyping (e.g., lean MAFLD, diabetic MAFLD) was omitted despite acknowledged disease heterogeneity; this overlooks distinct pathophysiologies and may limit utility in stratified care. Third, while TCM features ranked among the top predictors in SHAP analysis, their clinical interpretability remains nebulous without mechanistic links to metabolic dysregulation. To resolve these gaps, we propose: (1) external validation in multiethnic cohorts using the published feature set (e.g., aspartate aminotransferase/alanine aminotransferase, low-density lipoprotein cholesterol, TCM tongue markers) to assess robustness; (2) subtype-specific modeling to capture MAFLD heterogeneity, potentially enhancing accuracy in high-risk subgroups; and (3) probing TCM microbiome/metabolomic correlations to ground tongue phenotypes in biological pathways, elevating model credibility. Despite these shortcomings, this work pioneers a low-cost screening paradigm. Future iterations addressing these issues could revolutionize early MAFLD detection in resource-limited settings.
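The recursive feature elimination half of the LASSO + RFE pipeline discussed above can be sketched with an ordinary least-squares model that drops the weakest standardized coefficient each round. The data, feature count, and signal structure below are hypothetical, not the study's cohort:

```python
import numpy as np

def rfe_linear(X, y, n_keep):
    """Recursive feature elimination with an OLS linear model."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        Xs = X[:, remaining]
        # standardize so coefficient magnitudes are comparable across features
        Xs = (Xs - Xs.mean(axis=0)) / Xs.std(axis=0)
        coef, *_ = np.linalg.lstsq(Xs, y - y.mean(), rcond=None)
        drop = int(np.argmin(np.abs(coef)))    # weakest feature this round
        remaining.pop(drop)
    return remaining

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
# only features 1 and 4 carry signal in this toy target
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + rng.normal(0, 0.1, 300)
selected = rfe_linear(X, y, n_keep=2)
```

In practice an initial LASSO pass, as in the study, would first prune clearly irrelevant features so that RFE's repeated refits stay cheap.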
To better understand the migration behavior of plastic fragments in the environment, rapid non-destructive methods for in-situ identification and characterization of plastic fragments are necessary. However, most studies have focused only on colored plastic fragments, ignoring colorless plastic fragments and the effects of different environmental media (backgrounds), thus underestimating their abundance. To address this issue, the present study used near-infrared spectroscopy to compare the identification of colored and colorless plastic fragments based on partial least squares-discriminant analysis (PLS-DA), extreme gradient boosting, support vector machine, and random forest classifiers. The effects of polymer color, type, thickness, and background on plastic fragment classification were evaluated. PLS-DA presented the best and most stable outcome, with higher robustness and a lower misclassification rate. All models frequently confused colorless plastic fragments with their background when the fragment thickness was less than 0.1 mm. A two-stage modeling method, which first distinguishes the plastic types and then identifies colorless plastic fragments that had been misclassified as background, was proposed. The method presented an accuracy higher than 99% in different backgrounds. In summary, this study developed a novel method for rapid and synchronous identification of colored and colorless plastic fragments under complex environmental backgrounds.
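The two-stage scheme described above (classify first, then re-examine samples labeled as background) can be sketched with a nearest-centroid classifier standing in for PLS-DA, on synthetic "spectra". All class centers, band counts, and noise levels below are illustrative assumptions:

```python
import numpy as np

class NearestCentroid:
    """Tiny stand-in classifier (the study used PLS-DA): nearest class mean."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.means[None, :, :]) ** 2).sum(axis=-1)
        return self.classes[d.argmin(axis=1)]

rng = np.random.default_rng(0)
def spectra(center, n):
    """Synthetic 5-band 'spectra' scattered around a class center."""
    return center + rng.normal(0, 0.15, (n, 5))

bg   = spectra(np.zeros(5), 60)                     # background medium
pet  = spectra(np.array([2.0, 0, 1, 0, 0]), 60)     # colored PET-like class
pe   = spectra(np.array([0, 2.0, 0, 1, 0]), 60)     # colored PE-like class
thin = spectra(np.array([0.4, 0, 0.2, 0, 0]), 60)   # thin colorless, near background

# stage 1: trained without a "thin colorless" class, so thin fragments
# are mostly absorbed into the background class
stage1 = NearestCentroid().fit(
    np.vstack([bg, pet, pe]),
    np.array(["bg"] * 60 + ["PET"] * 60 + ["PE"] * 60))
pred = stage1.predict(thin)

# stage 2: re-examine everything stage 1 called background
stage2 = NearestCentroid().fit(
    np.vstack([bg, thin]),
    np.array(["bg"] * 60 + ["thin"] * 60))
recovered = stage2.predict(thin[pred == "bg"])
recovery_rate = float((recovered == "thin").mean())
```

The second stage works because it only has to separate two nearby classes, a much easier decision boundary than the full multi-class problem.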
The presence of aluminum (Al^(3+)) and fluoride (F^(−)) ions in the environment can be harmful to ecosystems and human health, highlighting the need for accurate and efficient monitoring. In this paper, an innovative approach is presented that leverages machine learning to enhance the accuracy and efficiency of fluorescence-based detection for sequential quantitative analysis of Al^(3+) and F^(−) ions in aqueous solutions. The proposed method involves the synthesis of sulfur-functionalized carbon dots (C-dots) as fluorescence probes, whose fluorescence is enhanced upon interaction with Al^(3+) ions, achieving a detection limit of 4.2 nmol/L. Subsequently, in the presence of F^(−) ions, the fluorescence is quenched, with a detection limit of 47.6 nmol/L. The fingerprints of the fluorescence images are extracted using a cross-platform computer vision library in Python, followed by data preprocessing. The fingerprint data are then subjected to cluster analysis using the K-means model, and the average silhouette coefficient indicates excellent model performance. Finally, a regression analysis based on principal component analysis is employed to achieve more precise quantitative analysis of aluminum and fluoride ions. The results demonstrate that the developed model excels in terms of accuracy and sensitivity, addressing the urgent need for effective environmental monitoring and risk assessment and making it a valuable tool for safeguarding ecosystems and public health.
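The K-means clustering and silhouette evaluation described above can be sketched directly from their definitions. The two synthetic 2-D clusters below are hypothetical stand-ins for the extracted image fingerprints (e.g., Al^(3+)-enhanced versus F^(−)-quenched samples):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm with data points as initial centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

def silhouette(X, labels):
    """Mean silhouette coefficient, computed directly from its definition."""
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    n = len(X)
    s = []
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean()          # intra-cluster distance
        b = min(D[i, labels == c].mean()                     # nearest other cluster
                for c in set(labels) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

rng = np.random.default_rng(3)
# two well-separated synthetic "fingerprint" clusters
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
labels, _ = kmeans(X, k=2)
score = silhouette(X, labels)
```

A mean silhouette near 1 indicates tight, well-separated clusters; values near 0 would suggest the chosen k does not match the data's structure.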
Excellent detonation performances and low sensitivity are prerequisites for the deployment of energetic materials. Exploring the underlying factors that affect impact sensitivity and detonation performances, as well as how to obtain materials with desired properties, remains a long-term challenge. Machine learning, with its ability to solve complex tasks and perform robust data processing, can reveal the relationship between performance and descriptive indicators, potentially accelerating the development process of energetic materials. Against this background, impact sensitivity, detonation performances, and 28 physicochemical parameters for 222 energetic materials from density functional theory calculations and published literature were compiled. Four machine learning algorithms were employed to predict various properties of energetic materials, including impact sensitivity, detonation velocity, detonation pressure, and Gurney energy. Analysis of Pearson coefficients and feature importance showed that the heat of explosion, oxygen balance, decomposition products, and HOMO energy levels have a strong correlation with the impact sensitivity of energetic materials, while oxygen balance, decomposition products, and density have a strong correlation with detonation performances. Using the impact sensitivity of 2,3,4-trinitrotoluene and the detonation performances of 2,4,6-trinitrobenzene-1,3,5-triamine as benchmarks, the analysis of feature importance rankings and statistical data revealed the optimal ranges of key features balancing impact sensitivity and detonation performances: oxygen balance values should be between -40% and -30%, density should range from 1.66 to 1.72 g/cm^(3), HOMO energy levels should be between -6.34 and -6.31 eV, and lipophilicity should be between -1.0 and 0.1 or between 4.49 and 5.59. These findings not only offer important insights into the impact sensitivity and detonation performances of energetic materials, but also provide a theoretical guidance paradigm for the design and development of new energetic materials with optimal detonation performances and reduced sensitivity.
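The Pearson-coefficient screening used above to relate descriptors to impact sensitivity can be sketched as follows. The descriptor values and the toy target are synthetic, not the paper's 222-material dataset:

```python
import numpy as np

def pearson_rank(X, y, names):
    """Rank features by the magnitude of their Pearson correlation with the target."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(0) / (
        np.sqrt((Xc ** 2).sum(0)) * np.sqrt((yc ** 2).sum()))
    order = np.argsort(-np.abs(r))
    return [(names[i], float(r[i])) for i in order]

rng = np.random.default_rng(7)
n = 120
# hypothetical descriptor values, loosely in the ranges discussed above
oxygen_balance = rng.normal(-35, 5, n)
homo = rng.normal(-6.3, 0.1, n)
density = rng.normal(1.7, 0.05, n)
# toy sensitivity target: dominated by oxygen balance in this synthetic setup
h50 = -0.8 * oxygen_balance + 5.0 * homo + rng.normal(0, 2, n)

ranking = pearson_rank(
    np.column_stack([oxygen_balance, homo, density]), h50,
    ["oxygen balance", "HOMO", "density"])
```

A screen like this only captures linear, pairwise associations; tree-based feature importance, as also used in the paper, complements it by catching nonlinear and interaction effects.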
With the rapid development of artificial intelligence, magnetocaloric materials, like other materials, are being developed with increased efficiency and enhanced performance. However, most studies do not take phase transitions into account, and as a result, the predictions are usually not accurate enough. In this context, we have established an explicable relationship between alloy compositions and phase transitions by feature imputation. A facile machine learning model is proposed to screen candidate NiMn-based Heusler alloys with the desired magnetic entropy change and magnetic transition temperature, achieving a high accuracy of R^(2) ≈ 0.98. As expected, the measured properties of the prepared NiMn-based alloys, including phase transition type, magnetic entropy change, and transition temperature, are all in good agreement with the ML predictions. Besides being the first to demonstrate an explicable relationship between alloy compositions, phase transitions, and magnetocaloric properties, our proposed ML model is highly predictive and interpretable, which can provide a strong theoretical foundation for identifying high-performance magnetocaloric materials in the future.
BACKGROUND Colorectal polyps are precancerous lesions of colorectal cancer. Early detection and resection of colorectal polyps can effectively reduce the mortality of colorectal cancer. Endoscopic mucosal resection (EMR) is a common polypectomy procedure in clinical practice, but it has a high postoperative recurrence rate. Currently, there is no predictive model for the recurrence of colorectal polyps after EMR. AIM To construct and validate a machine learning (ML) model for predicting the risk of colorectal polyp recurrence one year after EMR. METHODS This study retrospectively collected data from 1694 patients at three medical centers in Xuzhou. Additionally, a total of 166 patients were collected to form a prospective validation set. Feature variable screening was conducted using univariate and multivariate logistic regression analyses, and five ML algorithms were used to construct the predictive models. The optimal models were evaluated based on different performance metrics. Decision curve analysis (DCA) and SHapley Additive exPlanation (SHAP) analysis were performed to assess clinical applicability and predictor importance. RESULTS Multivariate logistic regression analysis identified 8 independent risk factors for colorectal polyp recurrence one year after EMR (P < 0.05). Among the models, eXtreme Gradient Boosting (XGBoost) demonstrated the highest area under the curve (AUC) in the training set, internal validation set, and prospective validation set, with AUCs of 0.909 (95%CI: 0.89-0.92), 0.921 (95%CI: 0.90-0.94), and 0.963 (95%CI: 0.94-0.99), respectively. DCA indicated favorable clinical utility for the XGBoost model. SHAP analysis identified smoking history, family history, and age as the top three most important predictors in the model. CONCLUSION The XGBoost model has the best predictive performance and can assist clinicians in providing individualized colonoscopy follow-up recommendations.
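The decision curve analysis used above compares a model's net benefit, NB(pt) = TP/N - (FP/N) * pt / (1 - pt), against default strategies across threshold probabilities pt. A numpy sketch on synthetic predictions (the prevalence, model quality, and threshold grid are illustrative assumptions, not the study's cohort):

```python
import numpy as np

def net_benefit(y, p, pt):
    """Decision-curve net benefit of treating patients with predicted risk >= pt."""
    treat = p >= pt
    n = len(y)
    tp = (treat & (y == 1)).sum() / n     # true-positive rate in the whole sample
    fp = (treat & (y == 0)).sum() / n     # false-positive rate in the whole sample
    return tp - fp * pt / (1 - pt)

rng = np.random.default_rng(5)
n = 1000
y = (rng.random(n) < 0.3).astype(int)     # synthetic 30% recurrence rate
# informative but imperfect predicted probabilities (synthetic model output)
p = np.clip(0.3 + 0.4 * (y - 0.3) + rng.normal(0, 0.15, n), 0.01, 0.99)

thresholds = np.linspace(0.05, 0.6, 12)
nb_model = [net_benefit(y, p, t) for t in thresholds]
nb_all = [net_benefit(y, np.ones(n), t) for t in thresholds]   # treat-all strategy
```

A model shows clinical utility on the decision curve when its net benefit exceeds both treat-all and treat-none (net benefit 0) over the clinically relevant threshold range.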
Finding materials with specific properties is a hot topic in materials science. Traditional materials design relies on empirical and trial-and-error methods, requiring extensive experiments and time and resulting in high costs. With the development of physics, statistics, computer science, and other fields, machine learning offers opportunities for systematically discovering new materials. In machine learning-based inverse design in particular, algorithms analyze the mapping relationships between materials and their properties to find materials with desired properties. This paper first outlines the basic concepts of materials inverse design and the challenges faced by machine learning-based approaches to materials inverse design. Then, three main inverse design methods (exploration-based, model-based, and optimization-based) are analyzed in the context of different application scenarios. Finally, the applications of inverse design methods in alloys, optical materials, and acoustic materials are elaborated, and the prospects for materials inverse design are discussed. The authors hope to accelerate the discovery of new materials and provide new possibilities for advancing materials science and innovative design methods.
Carbon emissions resulting from energy consumption have become a pressing issue for governments worldwide. Accurate estimation of carbon emissions using satellite remote sensing data has become a crucial research problem. Previous studies relied on statistical regression models that failed to capture the complex nonlinear relationships between carbon emissions and characteristic variables. In this study, we propose a machine learning algorithm for carbon emissions, a Bayesian-optimized XGBoost regression model, using multi-year energy carbon emission data and nighttime lights (NTL) remote sensing data from Shaanxi Province, China. Our results demonstrate that the XGBoost algorithm outperforms linear regression and four other machine learning models, with an R^(2) of 0.906 and RMSE of 5.687. We observe an annual increase in carbon emissions, with high-emission counties primarily concentrated in northern and central Shaanxi Province, displaying a shift from discrete, sporadic points to contiguous, extended spatial distribution. Spatial autocorrelation clustering reveals predominantly high-high and low-low clustering patterns, with economically developed counties showing high-emission clustering and economically less developed counties displaying low-emission clustering. Our findings show that the use of NTL data and the XGBoost algorithm can estimate and predict carbon emissions more accurately and provide a complementary reference for satellite remote sensing image data to serve carbon emission monitoring and assessment. This research provides an important theoretical basis for formulating practical carbon emission reduction policies and contributes to the development of techniques for accurate carbon emission estimation using remote sensing data.
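The spatial autocorrelation clustering reported above is commonly quantified with global Moran's I, where I > 0 indicates that similar values (high-high, low-low) cluster in space. A minimal sketch on a toy 4 x 4 "county" grid with rook-contiguity weights (the values are hypothetical, not Shaanxi data):

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I with spatial weight matrix W (zero diagonal)."""
    z = values - values.mean()
    n = len(values)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# toy grid: high emissions clustered in the "north" (top two rows)
grid = np.array([[9, 8, 9, 8],
                 [8, 9, 8, 9],
                 [2, 1, 2, 1],
                 [1, 2, 1, 2]], dtype=float)
vals = grid.ravel()

# rook-contiguity weights: two cells are neighbors if they share a grid edge
size = 16
W = np.zeros((size, size))
for i in range(4):
    for j in range(4):
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            a, b = i + di, j + dj
            if 0 <= a < 4 and 0 <= b < 4:
                W[i * 4 + j, a * 4 + b] = 1

I = morans_i(vals, W)
```

For this north-south split the statistic is strongly positive; local variants (LISA) then label each county as high-high, low-low, or an outlier, which is the clustering map described in the abstract.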
Colon cancer is one of the malignant tumors with high morbidity and mortality worldwide [1], and its early diagnosis is crucial for improving patient survival. However, because colon cancer lacks obvious early symptoms, many patients are already at a middle or late stage when diagnosed and have missed the best window for treatment. Therefore, developing an efficient and accurate diagnostic method for colon cancer is of great clinical significance and scientific value. Currently, the established colon cancer biomarkers carcinoembryonic antigen and carbohydrate antigen 19-9 [2] have low sensitivity and specificity; the emerging markers circulating tumor DNA (ctDNA) and miRNA face high costs and standardization challenges; and existing methods lack spatial resolution. These limitations prompt the incorporation of spatial metabolomics technologies to enhance diagnostic capabilities.
Geological analysis, despite being a long-standing method for identifying adverse geology in tunnels, has significant limitations due to its reliance on empirical judgment. The quantitative character of geochemical anomalies associated with adverse geology provides a novel strategy for addressing these limitations. However, statistical methods for identifying geochemical anomalies are insufficient for tunnel engineering. In contrast, data mining techniques such as machine learning have demonstrated greater efficacy when applied to geological data. Herein, a method for identifying adverse geology using machine learning of geochemical anomalies is proposed. The method identified geochemical anomalies in the tunnel that were not detected by statistical methods. We employed robust factor analysis and self-organizing maps to reduce the dimensionality of the geochemical data and extract the anomaly element combination (AEC). Using the AEC sample data, we successfully trained an isolation forest model to identify multi-element anomalies, and we analyzed the adverse geological features based on these anomalies. This study therefore extends the traditional approach of geological analysis in tunnels and demonstrates that machine learning is an effective tool for intelligent geological analysis. Correspondingly, the research offers new insights regarding adverse geology and the prevention of hazards during the construction of tunnels and underground engineering projects.
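The isolation forest used above scores anomalies by how quickly random axis-aligned splits isolate a point: anomalous samples need few splits, so short average path lengths map to high scores. A compact from-scratch sketch on synthetic three-element geochemical data (the subsample size, tree count, and depth limit are illustrative assumptions):

```python
import numpy as np

def c_factor(n):
    """Average path length of an unsuccessful BST search; normalizes depths."""
    if n <= 1:
        return 0.0
    h = np.log(n - 1) + 0.5772156649            # harmonic number approximation
    return 2 * h - 2 * (n - 1) / n

def itree_depth(x, X, rng, depth=0, limit=8):
    """Depth at which random axis-aligned splits isolate point x within sample X."""
    if depth >= limit or len(X) <= 1:
        return depth + c_factor(len(X))
    q = rng.integers(X.shape[1])                # random split attribute
    lo, hi = X[:, q].min(), X[:, q].max()
    if lo == hi:
        return depth + c_factor(len(X))
    p = rng.uniform(lo, hi)                     # random split value
    side = X[:, q] < p
    branch = X[side] if x[q] < p else X[~side]
    return itree_depth(x, branch, rng, depth + 1, limit)

def iforest_scores(X, n_trees=100, sample=64, seed=0):
    """Anomaly score s = 2^(-E[depth]/c(sample)); closer to 1 means more anomalous."""
    rng = np.random.default_rng(seed)
    m = min(sample, len(X))
    scores = []
    for x in X:
        depths = [itree_depth(x, X[rng.choice(len(X), m, replace=False)], rng)
                  for _ in range(n_trees)]
        scores.append(2 ** (-np.mean(depths) / c_factor(m)))
    return np.array(scores)

rng = np.random.default_rng(1)
# background geochemical samples plus two multi-element anomalies (synthetic)
X = np.vstack([rng.normal(0, 1, (120, 3)), np.array([[6., 6, 6], [-6, 5, -6]])])
scores = iforest_scores(X)
```

Because the method makes no distributional assumptions, it suits multi-element geochemical data where anomalies are defined jointly across elements rather than per element.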
BACKGROUND: Patients with early-stage hepatocellular carcinoma (HCC) generally have good survival rates following surgical resection. However, a subset of these patients experience recurrence within five years post-surgery. AIM: To develop predictive models utilizing machine learning (ML) methods to detect early-stage patients at a high risk of mortality. METHODS: Eight hundred and eight patients with HCC at Beijing Ditan Hospital were randomly allocated to training and validation cohorts in a 2:1 ratio. Prognostic models were generated using random survival forests and artificial neural networks (ANNs). These ML models were compared with classic HCC scoring systems. A decision-tree model was established to validate the contribution of immune-inflammatory indicators to the long-term outlook of patients with early-stage HCC. RESULTS: Immune-inflammatory markers, albumin-bilirubin scores, alpha-fetoprotein, tumor size, and International Normalized Ratio were closely associated with 5-year survival rates. Among the predictive models, the ANN model generated from these indicators exhibited superior performance, with a 5-year area under the curve (AUC) of 0.85 (95%CI: 0.82-0.88). In the validation cohort, the 5-year AUC was 0.82 (95%CI: 0.74-0.85). According to the ANN model, patients were classified into high-risk and low-risk groups, with an overall survival hazard ratio of 7.98 (95%CI: 5.85-10.93, P<0.0001) between the two groups. INTRODUCTION: Hepatocellular carcinoma (HCC) is one of the six most prevalent cancers [1] and the third leading cause of cancer-related mortality [2]. China has some of the highest incidence and mortality rates for liver cancer, accounting for half of global cases [3,4]. The Barcelona Clinic Liver Cancer (BCLC) Staging System is the most widely used framework for diagnosing and treating HCC [5]. The optimal candidates for surgical treatment are those with early-stage HCC, classified as BCLC stage 0 or A. Patients with early-stage liver cancer typically have a better prognosis after surgical resection, achieving a 5-year survival rate of 60%-70% [6]. However, high postoperative recurrence rates remain a major obstacle to long-term efficacy. To improve the prognosis of patients with early-stage HCC, it is necessary to develop models that can identify those with poor prognoses, enabling stratified and personalized treatment and follow-up strategies. Chronic inflammation is linked to the development and advancement of tumors [7]. Recently, peripheral blood immune indicators, such as the neutrophil-to-lymphocyte ratio (NLR), platelet-to-lymphocyte ratio (PLR), and lymphocyte-to-monocyte ratio (LMR), have garnered extensive attention and have been used to predict survival in various tumors and inflammation-related diseases [8-10]. However, the relationship between these combinations of immune markers and outcomes in patients with early-stage HCC requires further investigation. Machine learning (ML) algorithms can handle large and complex datasets, generating more accurate and personalized predictions through training algorithms that manage nonlinear statistical relationships better than traditional analytical methods. Commonly used ML models include artificial neural networks (ANNs) and random survival forests (RSFs), which have shown satisfactory accuracy in prognostic prediction across various cancers and other diseases [11-13]. ANNs have performed well in identifying the progression from liver cirrhosis to HCC and in predicting overall survival (OS) in patients with HCC [14,15]. However, no studies have confirmed the ability of ML models to predict post-surgical survival in patients with early-stage HCC. Through ML, a better understanding of the risk factors for early-stage HCC prognosis can be achieved. This aids surgical decision-making, identification of patients at high risk of mortality, and selection of subsequent treatment strategies. In this study, we aimed to establish a 5-year prognostic model for patients with early-stage HCC after surgical resection, based on ML and systemic immune-inflammatory indicators. This model seeks to improve the early monitoring of high-risk patients and provide personalized treatment plans.
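The ANN-based risk stratification described above can be approximated in a few lines. The sketch below uses synthetic data and hypothetical feature names; scikit-learn's `MLPClassifier` stands in for the study's ANN, and the median predicted risk serves as an illustrative high/low-risk cutoff.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 600
# Hypothetical predictors: NLR, PLR, LMR, ALBI, AFP, tumor size, INR.
X = rng.normal(size=(n, 7))
# Synthetic 5-year mortality driven by a few of the features.
signal = 1.2 * X[:, 0] - 0.8 * X[:, 2] + 0.6 * X[:, 5]
y = (signal + rng.normal(size=n) > 0).astype(int)

# 2:1 training/validation split, as in the study design.
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=1 / 3, random_state=1, stratify=y
)
ann = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1),
)
ann.fit(X_tr, y_tr)
risk = ann.predict_proba(X_va)[:, 1]
auc = roc_auc_score(y_va, risk)
# Dichotomize at the median predicted risk into high/low-risk groups.
high_risk = risk >= np.median(risk)
```

A survival-specific model (e.g. a random survival forest on censored times) would replace the binary classifier in a faithful reproduction.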
Missing values in radionuclide diffusion datasets can undermine the predictive accuracy and robustness of machine learning (ML) models. In this study, a regression-based missing-data imputation method using a light gradient boosting machine (LGBM) algorithm was employed to impute more than 60% of the missing data, establishing a radionuclide diffusion dataset containing 16 input features and 813 instances. The effective diffusion coefficient (D_(e)) was predicted using ten ML models. The predictive accuracy of the ensemble meta-models, namely LGBM-extreme gradient boosting (XGB) and LGBM-categorical boosting (CatB), surpassed that of the other ML models, with R^(2) values of 0.94. The models were applied to predict the D_(e) values of EuEDTA^(−) and HCrO_(4)^(−) in saturated compacted bentonites at compaction densities ranging from 1200 to 1800 kg/m^(3), measured using a through-diffusion method. The generalization ability of the LGBM-XGB model surpassed that of LGBM-CatB in predicting the D_(e) of HCrO_(4)^(−). Shapley additive explanations identified total porosity as the most significant influencing factor, and partial dependence plot analysis yielded clearer results in the univariate correlation analysis. This study provides a regression imputation technique to refine radionuclide diffusion datasets, offering deeper insights into the diffusion mechanisms of radionuclides and supporting the safety assessment of the geological disposal of high-level radioactive waste.
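The regression-based imputation step can be sketched as follows, assuming a toy dataset in which one feature is about 60% missing. scikit-learn's `GradientBoostingRegressor` stands in for LightGBM, and the feature layout is hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
# Toy "diffusion" dataset: 400 instances, 4 features; feature 3 (think
# total porosity) is ~60% missing but correlated with features 0 and 1.
X_full = rng.uniform(0.0, 1.0, size=(400, 4))
X_full[:, 3] = 0.5 * X_full[:, 0] + 0.3 * X_full[:, 1] + 0.1 * rng.normal(size=400)

missing = rng.random(400) < 0.6
X = X_full.copy()
X[missing, 3] = np.nan

# Fit a boosted regressor on the complete rows, predicting the missing
# feature from the observed ones (LightGBM in the paper; sklearn here).
model = GradientBoostingRegressor(random_state=2)
model.fit(X[~missing][:, :3], X[~missing, 3])

X_imputed = X.copy()
X_imputed[missing, 3] = model.predict(X[missing][:, :3])
rmse = float(np.sqrt(np.mean((X_imputed[missing, 3] - X_full[missing, 3]) ** 2)))
```

The same pattern is repeated per incomplete feature when several columns have gaps, typically ordered from least to most missing.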
Abstract: Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes recent applications of ML techniques in post-kidney transplant rejection, covering the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in the field of kidney transplantation. Furthermore, this article discusses the challenges faced by existing research and potential future directions, providing a theoretical basis and technical references for related studies.
Abstract: Delayed wound healing following radical gastrectomy remains an important yet underappreciated complication that prolongs hospitalization, increases costs, and undermines patient recovery. In An et al's recent study, the authors present a machine learning-based risk prediction approach using routinely available clinical and laboratory parameters. Among the evaluated algorithms, a decision tree model demonstrated excellent discrimination, achieving an area under the curve of 0.951 in the validation set and notably identifying all true cases of delayed wound healing at the Youden index threshold. The inclusion of variables such as drainage duration, preoperative white blood cell and neutrophil counts, alongside age and sex, highlights the pragmatic appeal of the model for early postoperative monitoring. Nevertheless, several aspects warrant critical reflection, including the reliance on a postoperative variable (drainage duration), internal validation only, and certain reporting inconsistencies. This letter underscores both the promise and the limitations of adopting interpretable machine learning models in perioperative care. We advocate for transparent reporting, external validation, and careful consideration of clinically actionable timepoints before integration into practice. Ultimately, this work represents a valuable step toward precision risk stratification in gastric cancer surgery, and sets the stage for multicenter, prospective evaluations.
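The Youden index threshold mentioned above is simply the ROC operating point that maximizes sensitivity + specificity − 1. A minimal sketch on synthetic risk scores (all numbers hypothetical):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(8)
# Synthetic predicted risks: 180 normal healers (0), 20 delayed healers (1).
y = np.concatenate([np.zeros(180, dtype=int), np.ones(20, dtype=int)])
scores = np.concatenate(
    [rng.normal(0.30, 0.15, 180), rng.normal(0.75, 0.10, 20)]
)

fpr, tpr, thresholds = roc_curve(y, scores)
youden = tpr - fpr                 # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(youden))
threshold = float(thresholds[best])
sensitivity = float(tpr[best])
specificity = float(1 - fpr[best])
```

A sensitivity of 1.0 at this threshold, as reported in the study, means every true case fell above the cutoff in that particular validation set; it does not guarantee the same in new data.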
Abstract: Gastrointestinal (GI) cancers remain a leading cause of cancer-related morbidity and mortality worldwide. Artificial intelligence (AI), particularly machine learning and deep learning (DL), has shown promise in enhancing cancer detection, diagnosis, and prognostication. A narrative review of literature published from January 2015 to March 2025 was conducted using PubMed, Web of Science, and Scopus. Search terms included "gastrointestinal cancer", "artificial intelligence", "machine learning", "deep learning", "radiomics", "multimodal detection", and "predictive modeling". Studies were included if they focused on clinically relevant AI applications in GI oncology. AI algorithms for GI cancer detection have achieved high performance across imaging modalities, with endoscopic DL systems reporting accuracies of 85%-97% for polyp detection and segmentation. Radiomics-based models have predicted molecular biomarkers such as programmed cell death ligand 2 expression with areas under the curve of up to 0.92. Large language models applied to radiology reports demonstrated diagnostic accuracy comparable to junior radiologists (78.9% vs 80.0%), though without incremental value when combined with human interpretation. Multimodal AI approaches integrating imaging, pathology, and clinical data show emerging potential for precision oncology. AI in GI oncology has reached clinically relevant accuracy levels in multiple diagnostic tasks, with multimodal approaches and predictive biomarker modeling offering new opportunities for personalized care. However, broader validation, integration into clinical workflows, and attention to ethical, legal, and social implications remain critical for widespread adoption.
Funding: Financial support from the National Natural Science Foundation of China (No. 52371103); the Fundamental Research Funds for the Central Universities, China (No. 2242023K40028); the Open Research Fund of the Jiangsu Key Laboratory for Advanced Metallic Materials, China (No. AMM2023B01); the Research Fund of the Shihezi Key Laboratory of Aluminum-Based Advanced Materials, China (No. 2023PT02); and the Guangdong Province Science and Technology Major Project, China (No. 2021B0301030005).
Abstract: Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so it is of great importance to establish accurate processing maps to guide thermomechanical processing and enhance formability. In this study, we developed a particle swarm optimization-based back-propagation artificial neural network model to predict the high-temperature flow behavior of 0.25wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that of an Arrhenius-type constitutive model and a plain back-propagation artificial neural network model. To train these models, we obtained the raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 ℃ at strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, and microstructure characterization; the modeling results achieved a coefficient of determination higher than 99%.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2022YFA1402304); the National Natural Science Foundation of China (Grant Nos. 12034009, 12374005, 52288102, 52090024, and T2225013); the Fundamental Research Funds for the Central Universities; and the Program for JLU Science and Technology Innovative Research Team.
Abstract: Crystal structure prediction (CSP) is a foundational computational technique for determining the atomic arrangements of crystalline materials, especially under high-pressure conditions. While CSP plays a critical role in materials science, traditional approaches often encounter significant challenges related to computational efficiency and scalability, particularly when applied to complex systems. Recent advances in machine learning (ML) have shown tremendous promise in addressing these limitations, enabling the rapid and accurate prediction of crystal structures across a wide range of chemical compositions and external conditions. This review provides a concise overview of recent progress in ML-assisted CSP methodologies, with a particular focus on machine learning potentials and generative models. By critically analyzing these advances, we highlight the transformative impact of ML in accelerating materials discovery, enhancing computational efficiency, and broadening the applicability of CSP. Additionally, we discuss emerging opportunities and challenges in this rapidly evolving field.
Funding: Supported by JSPS KAKENHI Grant Numbers JP21K14064 and JP23K13239.
Abstract: Maintaining a high groundwater level (GWL) is important for preventing fires in peatlands. This study proposes GWL prediction using machine learning methods for forest plantations in Indonesian tropical peatlands. Deep neural networks (DNNs) have been used for such prediction; however, they have not previously been applied to groundwater prediction in Indonesian peatlands. Tropical peatland is characterized by high permeability, and forest plantations are surrounded by several canals. By predicting daily differences in GWL, the GWL itself can be predicted with high accuracy. DNNs, random forests, support vector regression, and XGBoost were compared, all of which showed similar errors. SHAP values revealed that precipitation falling on the hills rapidly seeps into the soil and flows into the canals, which agrees with the soil's high permeability. These findings can potentially be used to alleviate and manage future fires in peatlands.
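The difference-prediction idea (forecast the daily change in GWL, then accumulate the changes to recover the level) can be sketched on a toy rainfall-recession series. The dynamics and coefficients below are invented for illustration, with a random forest standing in for the compared models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
# Toy daily series: rainfall recharges the groundwater level (GWL),
# which otherwise recedes toward a canal-controlled base level.
days = 400
rain = rng.gamma(shape=0.3, scale=20.0, size=days)   # mm/day, mostly dry days
gwl = np.empty(days)
gwl[0] = -0.5                                        # m relative to surface
for t in range(1, days):
    gwl[t] = gwl[t - 1] + 0.002 * rain[t] - 0.05 * (gwl[t - 1] + 0.4)

# Target is the *daily difference* in GWL, as in the study.
dgwl = np.diff(gwl)
features = np.column_stack([rain[1:], gwl[:-1]])
split = 300
rf = RandomForestRegressor(n_estimators=200, random_state=3)
rf.fit(features[:split], dgwl[:split])

pred_diff = rf.predict(features[split:])
r2 = rf.score(features[split:], dgwl[split:])
# Recover the level itself by accumulating predicted differences.
pred_gwl = gwl[split] + np.cumsum(pred_diff)
mae = float(np.mean(np.abs(pred_gwl - gwl[split + 1:])))
```

Predicting differences keeps the target stationary and lets the model focus on the rainfall response rather than the slowly varying level.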
Abstract: The rapid growth of machine learning (ML) across fields has intensified the challenge of selecting the right algorithm for specific tasks, known as the Algorithm Selection Problem (ASP). Traditional trial-and-error methods have become impractical due to their resource demands. Automated Machine Learning (AutoML) systems automate this process, but often neglect the group structures and sparsity in meta-features, leading to inefficiencies in algorithm recommendations for classification tasks. This paper proposes a meta-learning approach using Multivariate Sparse Group Lasso (MSGL) to address these limitations. Our method models both within-group and across-group sparsity among meta-features to manage high-dimensional data and reduce multicollinearity across eight meta-feature groups. The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) with adaptive restart efficiently solves the non-smooth optimization problem. Empirical validation on 145 classification datasets with 17 classification algorithms shows that our meta-learning method outperforms four state-of-the-art approaches, achieving 77.18% classification accuracy, 86.07% recommendation accuracy, and 88.83% normalized discounted cumulative gain.
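FISTA with adaptive restart, the solver named above, is easy to state for the simpler lasso problem (the paper's MSGL adds group-structured penalties, which are omitted here). A self-contained NumPy sketch on synthetic data:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """FISTA with function-value adaptive restart for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    f_prev = np.inf
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = soft_threshold(z - grad / L, lam / L)
        f = 0.5 * np.sum((A @ x_new - b) ** 2) + lam * np.sum(np.abs(x_new))
        if f > f_prev:                   # adaptive restart: drop the momentum
            t = 1.0
            z = x_new
        else:
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)
            t = t_new
        x, f_prev = x_new, f
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(80, 40))
x_true = np.zeros(40)
x_true[:4] = [3.0, -2.0, 1.5, 2.5]       # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=80)
x_hat = fista_lasso(A, b, lam=0.5)
```

Restarting whenever the objective increases discards stale momentum, which is what makes the accelerated scheme robust on non-smooth problems like this one.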
Funding: Supported by the Jiangsu Provincial Science and Technology Project Basic Research Program (Natural Science Foundation of Jiangsu Province) (No. BK20211283).
Abstract: NJmat is a user-friendly, data-driven machine learning interface designed for materials design and analysis. The platform integrates advanced computational techniques, including natural language processing (NLP), large language models (LLMs), machine learning potentials (MLPs), and graph neural networks (GNNs), to facilitate materials discovery. The platform has been applied in diverse materials research areas, including perovskite surface design, catalyst discovery, battery materials screening, structural alloy design, and molecular informatics. By automating feature selection, predictive modeling, and result interpretation, NJmat accelerates the development of high-performance materials across energy storage, conversion, and structural applications. Additionally, NJmat serves as an educational tool, allowing students and researchers to apply machine learning techniques in materials science with minimal coding expertise. Through automated feature extraction, genetic algorithms, and interpretable machine learning models, NJmat simplifies the workflow for materials informatics, bridging the gap between AI and experimental materials research. The latest version (available at https://figshare.com/articles/software/NJmatML/24607893 (accessed on 01 January 2025)) enhances its functionality by incorporating NJmatNLP, a module leveraging language models such as MatBERT and those based on Word2Vec to support materials prediction tasks. By utilizing clustering and cosine similarity analysis with UMAP visualization, NJmat enables intuitive exploration of materials datasets. While NJmat primarily focuses on structure-property relationships and the discovery of novel chemistries, it can also assist in optimizing processing conditions when relevant parameters are included in the training data. By providing an accessible, integrated environment for machine learning-driven materials discovery, NJmat aligns with the objectives of the Materials Genome Initiative and promotes broader adoption of AI techniques in materials science.
Abstract: Tian et al present a timely machine learning (ML) model integrating biochemical indicators and novel traditional Chinese medicine (TCM) indicators (tongue-edge redness, greasy coating) to predict hepatic steatosis in patients at high metabolic risk. Their prospective cohort design and dual feature selection (LASSO + RFE), culminating in an interpretable XGBoost model (area under the curve: 0.82), represent a significant methodological advance. The inclusion of TCM diagnostics addresses the multisystem heterogeneity of metabolic dysfunction-associated fatty liver disease (MAFLD), a key strength that bridges holistic medicine with precision analytics and underscores potential cost savings over imaging-dependent screening. However, critical limitations impede clinical translation. First, the model's single-center validation (n=711) lacks external/generalizability testing across diverse populations, risking bias from local demographics. Second, MAFLD subtyping (e.g., lean MAFLD, diabetic MAFLD) was omitted despite acknowledged disease heterogeneity; this overlooks distinct pathophysiologies and may limit utility in stratified care. Third, while TCM features ranked among the top predictors in SHAP analysis, their clinical interpretability remains nebulous without mechanistic links to metabolic dysregulation. To resolve these gaps, we propose: (1) external validation in multiethnic cohorts using the published feature set (e.g., aspartate aminotransferase/alanine aminotransferase, low-density lipoprotein cholesterol, TCM tongue markers) to assess robustness; (2) subtype-specific modeling to capture MAFLD heterogeneity, potentially enhancing accuracy in high-risk subgroups; and (3) probing TCM microbiome/metabolomic correlations to ground tongue phenotypes in biological pathways, elevating model credibility. Despite these shortcomings, this work pioneers a low-cost screening paradigm. Future iterations addressing these issues could revolutionize early MAFLD detection in resource-limited settings.
Funding: Supported by the National Natural Science Foundation of China (No. 22276139) and the Shanghai Municipal State-owned Assets Supervision and Administration Commission (No. 2022028).
Abstract: To better understand the migration behavior of plastic fragments in the environment, rapid, non-destructive methods for in-situ identification and characterization of plastic fragments are needed. However, most studies have focused only on colored plastic fragments, ignoring colorless plastic fragments and the effects of different environmental media (backgrounds), thus underestimating their abundance. To address this issue, the present study used near-infrared spectroscopy to compare the identification of colored and colorless plastic fragments based on partial least squares-discriminant analysis (PLS-DA), extreme gradient boosting, support vector machine, and random forest classifiers. The effects of polymer color, type, thickness, and background on the classification of plastic fragments were evaluated. PLS-DA delivered the best and most stable outcome, with higher robustness and a lower misclassification rate. All models frequently confused colorless plastic fragments with their background when the fragment thickness was less than 0.1 mm. A two-stage modeling method, which first distinguishes the plastic type and then identifies colorless plastic fragments that had been misclassified as background, was proposed. The method achieved an accuracy higher than 99% across different backgrounds. In summary, this study developed a novel method for rapid and synchronous identification of colored and colorless plastic fragments under complex environmental backgrounds.
Funding: Supported by the National Natural Science Foundation of China (No. U21A20290); the Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515011656); the Projects of Talents Recruitment of GDUPT (No. 2023rcyj1003); the 2022 "Sail Plan" Project of the Maoming Green Chemical Industry Research Institute (No. MMGCIRI2022YFJH-Y-024); and the Maoming Science and Technology Project (No. 2023382).
Abstract: The presence of aluminum (Al^(3+)) and fluoride (F^(−)) ions in the environment can be harmful to ecosystems and human health, highlighting the need for accurate and efficient monitoring. In this paper, an innovative approach is presented that leverages machine learning to enhance the accuracy and efficiency of fluorescence-based detection for sequential quantitative analysis of Al^(3+) and F^(−) ions in aqueous solutions. The proposed method involves the synthesis of sulfur-functionalized carbon dots (C-dots) as fluorescence probes, whose fluorescence is enhanced upon interaction with Al^(3+) ions, achieving a detection limit of 4.2 nmol/L. Subsequently, in the presence of F^(−) ions, the fluorescence is quenched, with a detection limit of 47.6 nmol/L. The fingerprints of fluorescence images are extracted using a cross-platform computer vision library in Python, followed by data preprocessing. The fingerprint data are then subjected to cluster analysis using the K-means model, and the average silhouette coefficient indicates excellent model performance. Finally, a regression analysis based on principal component analysis is employed to achieve more precise quantitative analysis of aluminum and fluoride ions. The results demonstrate that the developed model excels in accuracy and sensitivity, addressing the urgent need for effective environmental monitoring and risk assessment and making it a valuable tool for safeguarding ecosystems and public health.
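The analysis chain (K-means clustering validated by the silhouette coefficient, then PCA-based regression for quantitation) can be sketched as follows; the "fingerprints" here are synthetic stand-ins for the image features described.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(6)
# Hypothetical image "fingerprints": three channel intensities responding
# monotonically to ion concentration (four concentration levels, 20 images each).
conc = np.repeat([0.0, 0.5, 1.0, 2.0], 20)
fingerprints = np.column_stack(
    [200 - 60 * conc, 120 + 25 * conc, 80 + 10 * conc]
) + rng.normal(scale=3.0, size=(80, 3))

# Cluster the fingerprints and check separation with the silhouette coefficient.
km = KMeans(n_clusters=4, n_init=10, random_state=6).fit(fingerprints)
sil = float(silhouette_score(fingerprints, km.labels_))

# PCA-based regression for quantitative prediction of concentration.
pcs = PCA(n_components=2).fit_transform(fingerprints)
r2 = LinearRegression().fit(pcs, conc).score(pcs, conc)
```

A silhouette value near 1 indicates well-separated concentration clusters; the PCA step de-correlates the channels before the regression.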
Funding: Supported by the Fundamental Research Funds for the Central Universities (Grant No. 2682024GF019).
Abstract: Excellent detonation performance and low sensitivity are prerequisites for the deployment of energetic materials. Exploring the underlying factors that affect impact sensitivity and detonation performance, and how to obtain materials with the desired properties, remains a long-term challenge. Machine learning, with its ability to solve complex tasks and perform robust data processing, can reveal the relationship between performance and descriptive indicators, potentially accelerating the development of energetic materials. Against this background, the impact sensitivity, detonation performances, and 28 physicochemical parameters of 222 energetic materials were compiled from density functional theory calculations and published literature. Four machine learning algorithms were employed to predict various properties of energetic materials, including impact sensitivity, detonation velocity, detonation pressure, and Gurney energy. Analysis of Pearson coefficients and feature importance showed that the heat of explosion, oxygen balance, decomposition products, and HOMO energy levels correlate strongly with the impact sensitivity of energetic materials, while oxygen balance, decomposition products, and density correlate strongly with detonation performance. Using the impact sensitivity of 2,3,4-trinitrotoluene and the detonation performance of 2,4,6-trinitrobenzene-1,3,5-triamine as benchmarks, analysis of feature importance rankings and statistical data revealed the optimal ranges of key features balancing impact sensitivity and detonation performance: oxygen balance values should be between -40% and -30%, density should range from 1.66 to 1.72 g/cm^(3), HOMO energy levels should be between -6.34 and -6.31 eV, and lipophilicity should be between -1.0 and 0.1 or between 4.49 and 5.59. These findings not only offer important insights into the impact sensitivity and detonation performance of energetic materials, but also provide a theoretical guidance paradigm for the design and development of new energetic materials with optimal detonation performance and reduced sensitivity.
Funding: Supported by the National Key R&D Program of China (No. 2022YFE0109500); the National Natural Science Foundation of China (Nos. 52071255, 52301250, 52171190, and 12304027); the Key R&D Project of Shaanxi Province (No. 2022GXLH-01-07); the Fundamental Research Funds for the Central Universities (China); and the World-Class Universities (Disciplines) and the Characteristic Development Guidance Funds for the Central Universities.
Abstract: With the rapid development of artificial intelligence, magnetocaloric materials, like other materials, are being developed with increased efficiency and enhanced performance. However, most studies do not take phase transitions into account, and as a result their predictions are usually not accurate enough. In this context, we have established an explicable relationship between alloy compositions and phase transitions by feature imputation. A facile machine learning model is proposed to screen candidate NiMn-based Heusler alloys with the desired magnetic entropy change and magnetic transition temperature, with high accuracy (R^(2) ≈ 0.98). As expected, the measured properties of the prepared NiMn-based alloys, including phase transition type, magnetic entropy change, and transition temperature, are all in good agreement with the ML predictions. In addition to being the first to demonstrate an explicable relationship between alloy compositions, phase transitions, and magnetocaloric properties, our proposed ML model is highly predictive and interpretable, providing a strong theoretical foundation for identifying high-performance magnetocaloric materials in the future.
Abstract: BACKGROUND: Colorectal polyps are precancerous lesions of colorectal cancer. Early detection and resection of colorectal polyps can effectively reduce the mortality of colorectal cancer. Endoscopic mucosal resection (EMR) is a common polypectomy procedure in clinical practice, but it has a high postoperative recurrence rate. Currently, there is no predictive model for the recurrence of colorectal polyps after EMR. AIM: To construct and validate a machine learning (ML) model for predicting the risk of colorectal polyp recurrence one year after EMR. METHODS: This study retrospectively collected data from 1694 patients at three medical centers in Xuzhou. Additionally, 166 patients were enrolled to form a prospective validation set. Feature screening was conducted using univariate and multivariate logistic regression analyses, and five ML algorithms were used to construct the predictive models. The optimal models were evaluated using several performance metrics. Decision curve analysis (DCA) and SHapley Additive exPlanations (SHAP) analysis were performed to assess clinical applicability and predictor importance. RESULTS: Multivariate logistic regression analysis identified 8 independent risk factors for colorectal polyp recurrence one year after EMR (P<0.05). Among the models, eXtreme Gradient Boosting (XGBoost) demonstrated the highest area under the curve (AUC) in the training set, internal validation set, and prospective validation set, with AUCs of 0.909 (95%CI: 0.89-0.92), 0.921 (95%CI: 0.90-0.94), and 0.963 (95%CI: 0.94-0.99), respectively. DCA indicated favorable clinical utility for the XGBoost model. SHAP analysis identified smoking history, family history, and age as the three most important predictors. CONCLUSION: The XGBoost model has the best predictive performance and can assist clinicians in providing individualized colonoscopy follow-up recommendations.
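A sketch of the boosted-model-plus-importance-ranking workflow, using scikit-learn's `GradientBoostingClassifier` in place of XGBoost and permutation importance as a model-agnostic stand-in for SHAP; the features and data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 800
# Hypothetical predictors: columns 0-2 carry signal (think smoking history,
# family history, age); columns 3-4 are noise.
X = rng.normal(size=(n, 5))
logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.7 * X[:, 2]
y = (logit + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7, stratify=y)
clf = GradientBoostingClassifier(random_state=7).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Rank predictors; permutation importance is a model-agnostic
# alternative to the SHAP values used in the study.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=7)
ranking = np.argsort(imp.importances_mean)[::-1]
```

SHAP would additionally give per-patient attributions; permutation importance only ranks features globally, which is the part sketched here.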
Funding: Funded by the National Natural Science Foundation of China (52061020); Major Science and Technology Projects in Yunnan Province (202302AG050009); and Yunnan Fundamental Research Projects (202301AV070003).
Abstract: Finding materials with specific properties is a hot topic in materials science. Traditional materials design relies on empirical and trial-and-error methods, requiring extensive experiments and time, and resulting in high costs. With the development of physics, statistics, computer science, and other fields, machine learning offers opportunities for systematically discovering new materials. Especially through machine learning-based inverse design, machine learning algorithms analyze the mapping relationships between materials and their properties to find materials with desired properties. This paper first outlines the basic concepts of materials inverse design and the challenges faced by machine learning-based approaches to it. Then, three main inverse design methods (exploration-based, model-based, and optimization-based) are analyzed in the context of different application scenarios. Finally, the applications of inverse design methods in alloys, optical materials, and acoustic materials are elaborated, and the prospects for materials inverse design are discussed. The authors hope to accelerate the discovery of new materials and provide new possibilities for advancing materials science and innovative design methods.
Funding: Supported by the Key Research and Development Program of Shaanxi Province, China (No. 2022ZDLSF07-05) and the Fundamental Research Funds for the Central Universities, CHD (No. 300102352901).
Abstract: Carbon emissions resulting from energy consumption have become a pressing issue for governments worldwide, and accurate estimation of carbon emissions using satellite remote sensing data has become a crucial research problem. Previous studies relied on statistical regression models that failed to capture the complex nonlinear relationships between carbon emissions and characteristic variables. In this study, we propose a machine learning approach for carbon emission estimation, a Bayesian-optimized XGBoost regression model, using multi-year energy carbon emission data and nighttime lights (NTL) remote sensing data from Shaanxi Province, China. Our results demonstrate that the XGBoost algorithm outperforms linear regression and four other machine learning models, with an R^(2) of 0.906 and an RMSE of 5.687. We observe an annual increase in carbon emissions, with high-emission counties primarily concentrated in northern and central Shaanxi Province, displaying a shift from discrete, sporadic points to a contiguous, extended spatial distribution. Spatial autocorrelation clustering reveals predominantly high-high and low-low clustering patterns, with economically developed counties showing high-emission clustering and economically less developed counties showing low-emission clustering. Our findings show that NTL data combined with the XGBoost algorithm can estimate and predict carbon emissions more accurately, providing a complementary reference for satellite remote sensing image data in carbon emission monitoring and assessment. This research provides an important theoretical basis for formulating practical carbon emission reduction policies and contributes to the development of techniques for accurate carbon emission estimation using remote sensing data.
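Hyperparameter tuning of a boosted regressor on NTL-style features can be sketched as follows; a small randomized search stands in for the Bayesian optimization used in the study, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(9)
# Synthetic county-level nighttime-light (NTL) statistics and emissions.
n = 300
ntl = rng.uniform(0.0, 60.0, size=(n, 3))
emissions = 2.0 * ntl[:, 0] + 0.5 * ntl[:, 1] ** 1.2 + rng.normal(scale=5.0, size=n)

# Randomized hyperparameter search as a simple stand-in for Bayesian optimization.
param_space = {
    "n_estimators": [100, 200, 400],
    "max_depth": [2, 3, 4],
    "learning_rate": [0.03, 0.1, 0.3],
}
search = RandomizedSearchCV(
    GradientBoostingRegressor(random_state=9),
    param_space,
    n_iter=10,
    cv=3,
    random_state=9,
)
search.fit(ntl, emissions)
r2 = float(search.best_score_)
```

Bayesian optimization differs only in how the next candidate configuration is chosen (a surrogate model instead of random draws); the cross-validated scoring loop is the same.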
Abstract: Colon cancer is one of the malignant tumors with high morbidity and mortality worldwide [1], and its early diagnosis is crucial for improving patient survival. However, because colon cancer lacks obvious early symptoms, many patients are already at a middle or late stage when diagnosed and have missed the best window for treatment. Therefore, developing an efficient and accurate diagnostic method for colon cancer is of great clinical significance and scientific value. Currently, the established colon cancer biomarkers, carcinoembryonic antigen and carbohydrate antigen 19-9 [2], have low sensitivity and specificity; the emerging markers circulating tumor DNA (ctDNA) and miRNA face high costs and standardization challenges; and existing methods lack spatial resolution, prompting the incorporation of spatial metabolomics technologies to enhance diagnostic capabilities.
Funding: Supported by the National Natural Science Foundation of China (Nos. 52279103 and 52379103) and the Natural Science Foundation of Shandong Province (No. ZR2023YQ049).
Funding: Supported by the High-Level Chinese Medicine Key Discipline Construction Project, No. zyyzdxk-2023005; the Capital Health Development Research Project, No. 2024-1-2173; and the National Natural Science Foundation of China, Nos. 82474426 and 82474419.
Abstract: BACKGROUND: Patients with early-stage hepatocellular carcinoma (HCC) generally have good survival rates following surgical resection. However, a subset of these patients experience recurrence within five years post-surgery.
AIM: To develop predictive models utilizing machine learning (ML) methods to detect early-stage patients at a high risk of mortality.
METHODS: Eight hundred and eight patients with HCC at Beijing Ditan Hospital were randomly allocated to training and validation cohorts in a 2:1 ratio. Prognostic models were generated using random survival forests and artificial neural networks (ANNs). These ML models were compared with other classic HCC scoring systems. A decision-tree model was established to validate the contribution of immune-inflammatory indicators to the long-term outlook of patients with early-stage HCC.
RESULTS: Immune-inflammatory markers, albumin-bilirubin scores, alpha-fetoprotein, tumor size, and international normalized ratio were closely associated with the 5-year survival rates. Among the various predictive models, the ANN model generated from these indicators through ML algorithms exhibited superior performance, with a 5-year area under the curve (AUC) of 0.85 (95%CI: 0.82-0.88). In the validation cohort, the 5-year AUC was 0.82 (95%CI: 0.74-0.85). According to the ANN model, patients were classified into high-risk and low-risk groups, with an overall survival hazard ratio of 7.98 (95%CI: 5.85-10.93, P < 0.0001) between the two groups.
INTRODUCTION: Hepatocellular carcinoma (HCC) is one of the six most prevalent cancers [1] and the third leading cause of cancer-related mortality [2]. China has some of the highest incidence and mortality rates for liver cancer, accounting for half of global cases [3,4]. The Barcelona Clinic Liver Cancer (BCLC) staging system is the most widely used framework for diagnosing and treating HCC [5]. The optimal candidates for surgical treatment are those with early-stage HCC, classified as BCLC stage 0 or A. Patients with early-stage liver cancer typically have a better prognosis after surgical resection, achieving a 5-year survival rate of 60%-70% [6]. However, the high postoperative recurrence rate of HCC remains a major obstacle to long-term efficacy. To improve the prognosis of patients with early-stage HCC, it is necessary to develop models that can identify those with poor prognoses, enabling stratified and personalized treatment and follow-up strategies. Chronic inflammation is linked to the development and advancement of tumors [7]. Recently, peripheral blood immune indicators, such as the neutrophil-to-lymphocyte ratio (NLR), platelet-to-lymphocyte ratio (PLR), and lymphocyte-to-monocyte ratio (LMR), have garnered extensive attention and have been used to predict survival in various tumors and inflammation-related diseases [8-10]. However, the relationship between these combinations of immune markers and the outcomes of patients with early-stage HCC requires further investigation. Machine learning (ML) algorithms are capable of handling large and complex datasets, generating more accurate and personalized predictions through training algorithms that manage nonlinear statistical relationships better than traditional analytical methods. Commonly used ML models include artificial neural networks (ANNs) and random survival forests (RSFs), which have shown satisfactory accuracy in prognostic predictions across various cancers and other diseases [11-13]. ANNs have performed well in identifying the progression from liver cirrhosis to HCC and in predicting overall survival (OS) in patients with HCC [14,15]. However, no studies have confirmed the ability of ML models to predict post-surgical survival in patients with early-stage HCC. Through ML, a better understanding of the risk factors for early-stage HCC prognosis can be achieved. This aids in surgical decision-making, identifying patients at a high risk of mortality, and selecting subsequent treatment strategies. In this study, we aimed to establish a 5-year prognostic model for patients with early-stage HCC after surgical resection, based on ML and systemic immune-inflammatory indicators. This model seeks to improve the early monitoring of high-risk patients and provide personalized treatment plans.
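The study design above (a 2:1 train/validation split, an ANN fed immune-inflammatory and clinical indicators, validation by AUC) can be sketched as follows. Everything here is illustrative: the features and outcome are synthetic stand-ins with an arbitrary built-in signal, not the Beijing Ditan Hospital data, and the network architecture is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 808  # cohort size reported in the abstract; the data below are synthetic

# Hypothetical feature matrix: NLR, PLR, LMR, ALBI score, AFP, tumor size, INR
X = rng.normal(size=(n, 7))
# Synthetic outcome loosely driven by a few features (illustrative only)
logit = 1.2 * X[:, 0] - 0.8 * X[:, 2] + 0.6 * X[:, 5]
y = (logit + rng.normal(scale=1.0, size=n) > 0).astype(int)  # 1 = 5-year death

# 2:1 training/validation split, mirroring the study design
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=1 / 3, random_state=0, stratify=y)

# Small feed-forward ANN with standardized inputs
ann = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
ann.fit(X_tr, y_tr)

auc = roc_auc_score(y_va, ann.predict_proba(X_va)[:, 1])
print(f"validation AUC: {auc:.2f}")
```

A real implementation would add survival-specific handling (censoring via an RSF or discrete-time model) and an external cohort; AUC on a held-out split, as here, corresponds only to the internal validation the abstract reports.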
Funding: Supported by the National Natural Science Foundation of China (Nos. 12475340 and 12375350), the Special Branch Project of South Taihu Lake, and the Scientific Research Fund of Zhejiang Provincial Education Department (No. Y202456326).
Abstract: Missing values in radionuclide diffusion datasets can undermine the predictive accuracy and robustness of machine learning (ML) models. In this study, a regression-based missing data imputation method using a light gradient boosting machine (LGBM) algorithm was employed to impute more than 60% of the missing data, establishing a radionuclide diffusion dataset containing 16 input features and 813 instances. The effective diffusion coefficient (D_e) was predicted using ten ML models. The predictive accuracy of the ensemble meta-models, namely LGBM-extreme gradient boosting (XGB) and LGBM-categorical boosting (CatB), surpassed that of the other ML models, with R² values of 0.94. The models were applied to predict the D_e values of EuEDTA⁻ and HCrO₄⁻ in saturated compacted bentonites at compaction densities ranging from 1200 to 1800 kg/m³, which were measured using a through-diffusion method. The generalization ability of the LGBM-XGB model surpassed that of LGBM-CatB in predicting the D_e of HCrO₄⁻. Shapley additive explanations identified total porosity as the most significant influencing factor. Additionally, the partial dependence plot analysis technique yielded clearer results in the univariate correlation analysis. This study provides a regression imputation technique for refining radionuclide diffusion datasets, offering deeper insights into the diffusion mechanisms of radionuclides and supporting the safety assessment of the geological disposal of high-level radioactive waste.