Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so it is of great importance to establish accurate processing maps to guide thermomechanical processing and enhance formability. In this study, we developed a particle swarm optimization-based back propagation artificial neural network model to predict the high-temperature flow behavior of 0.25 wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that of an Arrhenius-type constitutive model and a plain back propagation artificial neural network model. To train these models, we obtained the raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 °C at strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for the ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, and microstructure characterization; the modeling results achieved a coefficient of determination higher than 99%.
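As a reference point for the comparison above, a minimal sketch of the Arrhenius-type (hyperbolic sine) constitutive relation and of the coefficient of determination used to score the models; the material constants in the example are illustrative placeholders, not values fitted in the study.

```python
import math

# Arrhenius-type (hyperbolic sine) constitutive model, a common baseline for
# flow-stress prediction. Constants A, alpha, n, Q are placeholders.
R = 8.314  # universal gas constant, J/(mol*K)

def zener_hollomon(strain_rate: float, T: float, Q: float) -> float:
    """Temperature-compensated strain rate Z = eps_dot * exp(Q / (R*T))."""
    return strain_rate * math.exp(Q / (R * T))

def flow_stress(strain_rate: float, T: float, A: float, alpha: float,
                n: float, Q: float) -> float:
    """Invert eps_dot = A * sinh(alpha*sigma)^n * exp(-Q/(R*T)) for sigma."""
    Z = zener_hollomon(strain_rate, T, Q)
    return math.asinh((Z / A) ** (1.0 / n)) / alpha

def r_squared(y_true, y_pred):
    """Coefficient of determination used to score the fitted models."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

At fixed temperature, the predicted flow stress rises with strain rate, matching the qualitative behavior probed by the hot compression tests.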
Background: Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its usage in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is greatly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields.
Results: This study compares statistical and machine learning models to assess their ability to predict cotton yield across the major producing districts of Karnataka, India, using a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. The artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2). The model evaluation metrics, such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts for which yield was predicted. These differences highlight the differential interactions of weather factors in cotton yield formation, and the individual response to each weather factor under the different soils and management conditions of the major cotton-growing districts of Karnataka.
Conclusions: Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield because of their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
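The evaluation metrics named above can be sketched as follows; the definitions use the Nash-Sutcliffe form of EF and normalize RMSE by the observed mean, which are common agro-meteorological conventions and may differ in detail from the paper's.

```python
import math

# RMSE, normalized RMSE (as a percentage of the observed mean), and
# modelling efficiency (Nash-Sutcliffe form), the three metrics used to
# judge the district-level yield forecasts.
def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nrmse_percent(obs, pred):
    return 100.0 * rmse(obs, pred) / (sum(obs) / len(obs))

def modelling_efficiency(obs, pred):
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

EF equals 1 for a perfect forecast and drops toward (and below) 0 as the model does no better than predicting the observed mean.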
Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by its high coefficient of correlation with RO observations (r = 0.87), than did the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling in general, and into predicting Es layer occurrences and characteristics in particular.
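The headline comparison between the two Es models rests on the Pearson correlation coefficient, which can be computed directly:

```python
import math

# Pearson correlation coefficient, the metric behind r = 0.87 (deep learning)
# vs. r = 0.53 (empirical model) against RO observations.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```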
Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles. While artificial intelligence (AI) offers promise, its typical need for vast training datasets hinders its use in these information-scarce frontiers. We introduce learning at criticality (LaC), a reinforcement learning scheme that tunes large language models (LLMs) to a sharp learning transition, addressing this information scarcity. At this transition, LLMs achieve peak generalization from minimal data, exemplified by 7-digit base-7 addition, a test of nontrivial arithmetic reasoning. To elucidate this peak, we analyze a minimal concept-network model designed to capture the essence of how LLMs might link tokens. Trained on a single exemplar, this model also undergoes a sharp learning transition, which exhibits hallmarks of a second-order phase transition, notably power-law distributed solution path lengths. At this critical point, the system maximizes a "critical thinking pattern" crucial for generalization, enabled by the underlying scale-free exploration. This suggests LLMs reach peak performance by operating at criticality, where such explorative dynamics enable the extraction of underlying operational rules. We demonstrate LaC in quantum field theory: an 8B-parameter LLM, tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums, solves unseen, higher-order problems, significantly outperforming far larger models. LaC thus leverages critical phenomena, a physical principle, to empower AI for complex, data-sparse challenges in fundamental physics.
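The benchmark task mentioned, 7-digit base-7 addition, can be generated and checked with a short utility; only the task itself is sketched here, not the LaC tuning procedure.

```python
import random

# Generate and verify exemplars of the 7-digit base-7 addition benchmark.
def to_base7(n: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 7)
        digits.append(str(r))
    return "".join(reversed(digits))

def from_base7(s: str) -> int:
    return int(s, 7)

def make_exemplar(rng: random.Random):
    """Draw two 7-digit base-7 operands and return (a, b, a+b) in base 7."""
    lo, hi = 7 ** 6, 7 ** 7 - 1  # range of 7-digit base-7 integers
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    return to_base7(a), to_base7(b), to_base7(a + b)
```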
Assessing slope stability is one of the crucial tasks of geotechnical engineering for evaluating and managing risks related to natural hazards, directly affecting safety and sustainable development. This study focuses on developing robust and practical hybrid models to predict the slope stability status of the circular failure mode. For this purpose, three models were developed using a database of 627 case histories of slope stability status. The models were built with the random forest (RF), support vector machine (SVM), and extreme gradient boosting (XGB) techniques, employing a 5-fold cross-validation approach. To enhance their performance, this study employs a Bayesian optimizer (BO) to fine-tune their hyperparameters. The results indicate that the performance order of the three developed models is RF-BO > SVM-BO > XGB-BO. Furthermore, comparison with previous models shows that the RF-BO model can determine slope stability status with outstanding performance. This implies that the RF-BO model could serve as a dependable tool for project managers, assisting in the evaluation of slope stability during both the design and operational phases of projects, despite the inherent challenges in this domain. The parameter importance results indicate that cohesion, friction angle, and slope height exert the most significant influence on slope stability status. This suggests that concentrating on these parameters and employing the RF-BO model can effectively mitigate the severity of geohazards in the short term and contribute to long-term sustainable development objectives.
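The 5-fold cross-validation split used during model development can be sketched as follows; the classifiers and the Bayesian optimizer themselves are omitted.

```python
# Partition n_samples case histories into k folds, each fold serving once as
# the validation set while the remainder trains the model.
def k_fold_indices(n_samples: int, k: int = 5):
    """Yield (train_idx, val_idx) pairs partitioning range(n_samples)."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size
```

With the study's 627 cases and k = 5, fold sizes differ by at most one sample.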
Federated learning (FL) enables privacy-preserving training of Transformer-based language models, but remains vulnerable to backdoor attacks that compromise model reliability. This paper presents a comparative analysis of defense strategies against both classical and advanced backdoor attacks, evaluated across autoencoding and autoregressive models. Unlike prior studies, this work provides the first systematic comparison of perturbation-based, screening-based, and hybrid defenses in Transformer-based FL environments. Our results show that screening-based defenses consistently outperform perturbation-based ones, effectively neutralizing most attacks across architectures. However, this robustness comes with significant computational overhead, revealing a clear trade-off between security and efficiency. By explicitly identifying this trade-off, our study advances the understanding of defense strategies in federated learning and highlights the need for lightweight yet effective screening methods for trustworthy deployment in diverse application domains.
Accurate retrieval of casting 3D models is crucial for process reuse. Current methods focus primarily on shape similarity and neglect process design features, which compromises reusability. In this study, a novel deep learning retrieval method for process reuse was proposed, which integrates process design features into the retrieval of casting 3D models. The method leverages the contrastive language-image pretraining (CLIP) model to extract shape features from the three views and sectional views of the casting model, and combines them with process design features such as modulus, main wall thickness, symmetry, and length-to-height ratio to enhance process reusability. A database of 230 production casting models was established for validation. Results indicate that incorporating process design features improves model accuracy by 6.09%, reaching 97.82%, and increases process similarity by 30.25%. The reusability of the process was further verified using the casting simulation software EasyCast: the process retrieved after integrating process design features produces the least shrinkage in the target model, demonstrating the method's superior ability for process reuse. The approach does not require a large dataset for training and optimization, making it highly applicable to casting process design and related manufacturing processes.
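One plausible way to combine CLIP shape embeddings with process-feature vectors is a weighted cosine similarity; the fusion rule and the weight below are a hypothetical illustration, not the paper's actual scheme.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return num / (nu * nv)

def combined_similarity(shape_a, shape_b, proc_a, proc_b, w_shape=0.7):
    """Hypothetical fusion: weight CLIP shape similarity against similarity
    of process-design features (modulus, wall thickness, symmetry, ...)."""
    return (w_shape * cosine(shape_a, shape_b)
            + (1.0 - w_shape) * cosine(proc_a, proc_b))
```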
It is fundamental and useful to investigate how deep learning forecasting models (DLMs) perform compared to operational oceanography forecast systems (OFSs). However, few studies have intercompared their performance using an identical reference. In this study, three physically reasonable DLMs are implemented for forecasting sea surface temperature (SST), sea level anomaly (SLA), and sea surface velocity in the South China Sea. The DLMs are validated against both the testing dataset and the "OceanPredict" Class 4 dataset. Results show that the DLMs' RMSEs against the latter increase by 44%, 245%, 302%, and 109% for SST, SLA, current speed, and current direction, respectively, compared with those against the former. The choice of reference therefore has a significant influence on validation, and an identical, independent reference is necessary to intercompare DLMs and OFSs. Against the Class 4 dataset, the DLMs perform significantly better than the OFSs for SLA, and slightly better for the other variables. The error patterns of the DLMs and OFSs show a high degree of similarity, which is reasonable from the viewpoint of predictability and facilitates further applications of the DLMs. For extreme events, the DLMs and OFSs both present large but similar forecast errors for SLA and current speed, while the DLMs are likely to give larger errors for SST and current direction. This study provides an evaluation of the forecast skills of commonly used DLMs and an example of how to objectively intercompare different DLMs.
Artificial intelligence (AI) is rapidly transforming the landscape of hepatology by enabling automated data interpretation, early disease detection, and individualized treatment strategies. Chronic liver diseases, including non-alcoholic fatty liver disease, cirrhosis, and hepatocellular carcinoma, often progress silently and pose diagnostic challenges owing to the reliance on invasive biopsies and operator-dependent imaging. This review explores the integration of AI across key domains such as big data analytics, deep learning-based image analysis, histopathological interpretation, biomarker discovery, and clinical prediction modeling. AI algorithms have demonstrated high accuracy in liver fibrosis staging, hepatocellular carcinoma detection, and non-alcoholic fatty liver disease risk stratification, while also enhancing survival prediction and treatment response assessment. For instance, convolutional neural networks trained on portal venous-phase computed tomography have achieved areas under the curve of up to 0.92 for significant fibrosis (F2-F4) and 0.89 for advanced fibrosis, with magnetic resonance imaging-based models reporting comparable performance. Advanced methodologies such as federated learning preserve patient privacy during cross-center model training, and explainable AI techniques promote transparency and clinician trust. Despite these advancements, clinical adoption remains limited by challenges including data heterogeneity, algorithmic bias, regulatory uncertainty, and the lack of real-time integration into electronic health records. Looking forward, the convergence of multi-omics, imaging, and clinical data through interpretable and validated AI frameworks holds great promise for precision liver care. Continued efforts in model standardization, ethical oversight, and clinician-centered deployment will be essential to realize the full potential of AI in the diagnosis and treatment of liver disease.
This paper investigates the capabilities of large language models (LLMs) to leverage, learn, and create knowledge in solving computational fluid dynamics (CFD) problems through three categories of baseline problems: (1) conventional CFD problems that can be solved using numerical methods already known to LLMs, such as lid-driven cavity flow and the Sod shock tube problem; (2) problems that require new numerical methods beyond those available to LLMs, such as the recently developed Chien-physics-informed neural networks for singularly perturbed convection-diffusion equations; and (3) problems that cannot be solved using numerical methods known to LLMs, such as ill-conditioned Hilbert linear algebraic systems. The evaluations indicate that reasoning LLMs overall outperform non-reasoning models in the four test cases. Reasoning LLMs show excellent performance on CFD problems given tailored prompts, but their current capability for autonomous knowledge exploration and creation still needs to be enhanced.
Thermophilic proteins maintain their structure and function at high temperatures, making them widely useful in industrial applications. Because experimental measurement is complex, predicting the melting temperature (Tm) of proteins has become a research hotspot. Previous methods rely on amino acid composition, physicochemical properties of proteins, and the optimal growth temperature (OGT) of hosts for Tm prediction. However, their performance in predicting Tm values for thermophilic proteins (Tm > 60 °C) is generally unsatisfactory owing to data scarcity. Herein, we introduce TmPred, a Tm prediction model for thermophilic proteins that combines a protein language model, a graph convolutional network, and a Graphormer module. In performance evaluation, TmPred achieves a root mean square error (RMSE) of 5.48 °C, a Pearson correlation coefficient (P) of 0.784, and a coefficient of determination (R²) of 0.613, representing improvements of 19%, 15%, and 32%, respectively, over state-of-the-art predictive models such as DeepTM. Furthermore, TmPred demonstrated strong generalization capability on independent blind test datasets. Overall, TmPred provides an effective tool for the mining and modification of thermophilic proteins by leveraging deep learning.
The nonlinearity of hedonic datasets demands flexible automated valuation models to appraise housing prices accurately, and artificial intelligence models have been employed in mass appraisal to this end. However, they have been referred to as "black-box" models owing to the difficulties associated with interpreting them. In this study, we compared the results of traditional hedonic pricing models with those of machine learning algorithms, e.g., random forest and deep neural network models. Commonly implemented measures, e.g., Gini importance and permutation importance, provide only the magnitude of each explanatory variable's importance, which results in ambiguous interpretability. To address this issue, we employed the SHapley Additive exPlanations (SHAP) method and explored its effectiveness through comparisons with traditionally explainable measures in hedonic pricing models. The results demonstrated that (1) the random forest model with the SHAP method can be a reliable instrument for appraising housing prices with high accuracy and sufficient interpretability, (2) the interpretable results retrieved from the SHAP method can be consolidated by supporting statistical evidence, and (3) housing characteristics and local amenities are the primary contributors in property valuation, which is consistent with the findings of previous studies. Our methodological framework and robust findings thus provide informative insights into the use of machine learning methods in property valuation.
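SHAP attributions satisfy an additivity (efficiency) property: they sum to the prediction minus the baseline prediction. This can be verified with exact Shapley values computed by subset enumeration on a toy valuation function (illustrative only, not the study's appraisal model).

```python
from itertools import combinations
from math import factorial

# Exact Shapley values: each feature's contribution is its weighted marginal
# effect over all subsets of the remaining features, with absent features
# replaced by their baseline values.
def shapley_values(predict, x, baseline):
    n = len(x)
    def v(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(z)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for s in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += w * (v(set(s) | {i}) - v(set(s)))
        phi.append(total)
    return phi
```

For a toy price function with an interaction term, the interaction's contribution is split evenly between the two features, and the attributions sum exactly to the prediction gap.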
Floods and storm surges pose significant threats to coastal regions worldwide, demanding timely and accurate early warning systems (EWS) for disaster preparedness. Traditional numerical and statistical methods often fall short in capturing complex, nonlinear, real-time environmental dynamics. In recent years, machine learning (ML) and deep learning (DL) techniques have emerged as promising alternatives for enhancing the accuracy, speed, and scalability of EWS. This review critically evaluates the evolution of ML models, such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) networks, in coastal flood prediction, highlighting their architectures, data requirements, performance metrics, and implementation challenges. A unique contribution of this work is its synthesis of real-time deployment challenges, including latency, edge-cloud trade-offs, and policy-level integration, areas often overlooked in prior literature. Furthermore, the review presents a comparative framework of model performance across different geographic and hydrologic settings, offering actionable insights for researchers and practitioners. Limitations of current AI-driven models, such as interpretability, data scarcity, and generalization across regions, are discussed in detail. Finally, the paper outlines future research directions, including hybrid modelling, transfer learning, explainable AI, and policy-aware alert systems. By bridging technical performance and operational feasibility, this review aims to guide the development of next-generation intelligent EWS for resilient and adaptive coastal management.
This study introduces a novel approach to addressing the challenges of high-dimensional variables and strong nonlinearity in the optimization of reservoir production and layer configuration. For the first time, relational machine learning models are applied in reservoir development optimization. Traditional regression-based models often struggle in complex scenarios, but the proposed relational and regression-based composite differential evolution (RRCODE) method combines a Gaussian naive Bayes relational model with a radial basis function network regression model. This integration effectively captures complex relationships in the optimization process, improving both accuracy and convergence speed. Experimental tests on a multi-layer, multi-channel reservoir model, the Egg reservoir model, and a real-field reservoir model (the S reservoir) demonstrate that RRCODE significantly reduces water injection and production volumes while increasing economic returns and cumulative oil recovery. Moreover, the surrogate models employed in RRCODE are lightweight, with low computational overhead. These results highlight RRCODE's superior performance in the integrated optimization of reservoir production and layer configurations, offering more efficient and economically viable solutions for oilfield development.
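The differential-evolution backbone that a composite method like RRCODE builds on can be sketched as below; the relational (naive Bayes) and regression (RBF network) surrogates that RRCODE adds are replaced here by direct objective evaluations, so this is a baseline sketch only.

```python
import random

# Classic DE/rand/1/bin: for each individual, mutate three random peers,
# crossover with the current vector, and keep the trial if it improves.
def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            s = objective(trial)
            if s < scores[i]:  # greedy selection (minimization)
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

In RRCODE-style use, cheap surrogate predictions would pre-screen trial vectors so the expensive reservoir simulation is run only on promising candidates.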
Drug repurposing offers a promising alternative to traditional drug development, significantly reducing costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restricts their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advances in translational medicine, and integrating multi-omics data. We aim to inspire further innovations in multi-source data integration and to support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
Understanding spatial heterogeneity in groundwater responses to multiple factors is critical for water resource management in coastal cities. Daily groundwater depth (GWD) data from 43 wells (2018-2022) were collected in three coastal cities in Jiangsu Province, China. Seasonal and trend decomposition using Loess (STL), together with wavelet analysis and empirical mode decomposition, was applied to identify tide-influenced wells, while the remaining wells were grouped by hierarchical clustering analysis (HCA). Machine learning models were developed to predict GWD, and their responses to natural conditions and human activities were then assessed with the SHapley Additive exPlanations (SHAP) method. Results showed that eXtreme Gradient Boosting (XGB) was superior to the other models in prediction performance and computational efficiency (R² > 0.95). GWD in Yancheng and southern Lianyungang was greater than in Nantong, exhibiting larger fluctuations. Groundwater within 5 km of the coastline was affected by tides, with more pronounced effects in agricultural areas than in urban areas. Shallow groundwater (3-7 m depth) responded immediately (0-1 day) to rainfall and was primarily influenced by farmland and topography (slope and distance from rivers). Rainfall recharge to groundwater peaked at 50% farmland coverage, but this effect was suppressed by high temperatures (> 30 °C), a suppression that intensified with distance from rivers, especially in forest and grassland. Deep groundwater (> 10 m) showed delayed responses to rainfall (1-4 days) and temperature (10-15 days), with GDP as the primary influence, followed by agricultural irrigation and population density. Farmland helped to maintain stable GWD in regions of low population density, while excessive farmland coverage (> 90%) led to overexploitation. In the early stages of GDP development, increased industrial and agricultural water demand led to GWD decline, but as GDP levels improved significantly, groundwater consumption pressure gradually eased. This methodological framework is applicable not only to coastal cities in China but could also be extended to coastal regions worldwide.
Dear Editor, Aiming at the consensus tracking problem of a class of unknown heterogeneous nonlinear multiagent systems (MASs) with input constraints, a novel data-driven iterative learning consensus control (ILCC) protocol based on zeroing neural networks (ZNNs) is proposed. First, a dynamic linearization data model (DLDM) is acquired via dynamic linearization technology (DLT).
The rapid shift to online education has introduced significant challenges to maintaining academic integrity in remote assessments, as traditional proctoring methods fall short in preventing cheating. The increase in cheating during online exams highlights the need for efficient, adaptable detection models to uphold academic credibility. This paper presents a comprehensive analysis of deep learning models for cheating detection in online proctoring systems, evaluating their accuracy, efficiency, and adaptability. We benchmark several advanced architectures, including EfficientNet, MobileNetV2, ResNet variants, and more, using two specialized datasets (OEP and OP) tailored to online proctoring contexts. Our findings reveal that EfficientNetB1 and YOLOv5 achieve top performance on the OP dataset, with EfficientNetB1 attaining a peak accuracy of 94.59% and YOLOv5 reaching a mean average precision (mAP@0.5) of 98.3%. On the OEP dataset, ResNet50-CBAM, YOLOv5, and EfficientNetB0 stand out, with ResNet50-CBAM achieving an accuracy of 93.61% and EfficientNetB0 showing robust detection performance with balanced accuracy and computational efficiency. These results underscore the importance of selecting models that balance accuracy and efficiency, supporting scalable, effective cheating detection in online assessments.
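The mAP@0.5 metric reported for YOLOv5 counts a predicted box as a true positive when its intersection over union (IoU) with a ground-truth box reaches 0.5; the underlying IoU computation is:

```python
# IoU of two axis-aligned boxes, the matching criterion behind mAP@0.5.
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```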
BACKGROUND: Severe dengue in children with critical complications has been associated with high mortality rates, varying from approximately 1% to over 20%. To date, there is a lack of data on machine learning-based algorithms for predicting the risk of in-hospital mortality in children with dengue shock syndrome (DSS).
AIM: To develop machine learning models to estimate the risk of death in hospitalized children with DSS.
METHODS: This single-center retrospective study was conducted at the tertiary Children's Hospital No. 2 in Viet Nam between 2013 and 2022. The primary outcome was the in-hospital mortality rate in children with DSS admitted to the pediatric intensive care unit (PICU). Nine significant features were predetermined for further analysis using machine learning models. An oversampling method was used to enhance model performance. Supervised models, including logistic regression, naïve Bayes, random forest (RF), K-nearest neighbors, decision tree, and extreme gradient boosting (XGBoost), were employed to develop predictive models. SHapley Additive exPlanations was used to determine the degree of contribution of the features.
RESULTS: In total, 1278 PICU-admitted children with complete data were included in the analysis. The median patient age was 8.1 years (interquartile range: 5.4-10.7). Thirty-nine patients (3%) died. The RF and XGBoost models demonstrated the highest performance. The SHapley Additive exPlanations model revealed that the most important predictive features included younger age, female sex, presence of underlying diseases, severe transaminitis, severe bleeding, low platelet counts requiring platelet transfusion, elevated international normalized ratio, blood lactate, and serum creatinine, a large volume of resuscitation fluid, and a high vasoactive-inotropic score (> 30).
CONCLUSION: We developed robust machine learning-based models to estimate the risk of death in hospitalized children with DSS. The study findings are applicable to the design of management schemes to enhance the survival outcomes of patients with DSS.
Funding: the National Natural Science Foundation of China (No. 52371103); the Fundamental Research Funds for the Central Universities, China (No. 2242023K40028); the Open Research Fund of Jiangsu Key Laboratory for Advanced Metallic Materials, China (No. AMM2023B01); the Research Fund of Shihezi Key Laboratory of Aluminum-Based Advanced Materials, China (No. 2023PT02); Guangdong Province Science and Technology Major Project, China (No. 2021B0301030005).
Abstract: Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so establishing accurate processing maps to guide thermomechanical processing and enhance formability is of great importance. In this study, we developed a particle swarm optimization-based back-propagation artificial neural network model to predict the high-temperature flow behavior of 0.25 wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that derived from an Arrhenius-type constitutive model and a plain back-propagation artificial neural network model. To train these models, we obtained the raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 °C at strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, and microstructure characterization, and the modeling results achieved a coefficient of determination higher than 99%.
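The Arrhenius-type constitutive model the abstract compares against is commonly written in hyperbolic-sine form via the Zener-Hollomon parameter. A minimal sketch follows; the material constants Q, A, alpha, and n below are illustrative placeholders, not the fitted values for the ODS Cu alloy in the study.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_flow_stress(strain_rate, T_kelvin, Q, A, alpha, n):
    """Hyperbolic-sine Arrhenius flow stress (MPa) from the Zener-Hollomon parameter.

    sigma = (1/alpha) * asinh((Z/A)**(1/n)),  with Z = strain_rate * exp(Q/(R*T)).
    Q (J/mol), A (s^-1), alpha (MPa^-1), n are placeholder constants here.
    """
    Z = strain_rate * math.exp(Q / (R * T_kelvin))
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))

# Example: a 0.01 s^-1 compression test at 700 degrees C (973.15 K).
sigma = arrhenius_flow_stress(strain_rate=0.01, T_kelvin=973.15,
                              Q=300e3, A=1e14, alpha=0.012, n=6.0)
```

With any physically sensible constants, the predicted stress rises with strain rate and falls with temperature, which is the qualitative behavior the fitted model must reproduce.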
Funding: funded through the India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project, fund No. ASC/FASAL/KT-11/01/HQ-2010.
Abstract: Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its usage in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka, India, utilizing a long-term dataset spanning from 1990 to 2023 that includes yield and weather factors. The artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. The model evaluation metrics, such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within the acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts where yield was predicted. These differences highlighted the differential interactions of weather factors in each district for cotton yield formation, emphasizing the individual response of each weather factor under the different soils and management conditions over the major cotton growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs showed higher efficiency in forecasting cotton yield owing to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting in rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
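The three evaluation metrics the abstract names have standard definitions; a minimal sketch with hypothetical yield values (kg/ha) follows. EF here is the Nash-Sutcliffe form of modelling efficiency, which is the usual definition in crop-model evaluation.

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nrmse(obs, pred):
    """RMSE normalized by the observed mean, expressed in percent."""
    return 100.0 * rmse(obs, pred) / (sum(obs) / len(obs))

def modelling_efficiency(obs, pred):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 means the model
    is no better than predicting the observed mean."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [520.0, 480.0, 610.0, 555.0]   # hypothetical district yields, kg/ha
pred = [500.0, 470.0, 600.0, 580.0]
```

A forecast within the study's acceptance limits would show low nRMSE and an EF close to 1 on such data.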
Funding: supported by the Project of Stable Support for Youth Team in Basic Research Field, CAS (Grant No. YSBR-018); the National Natural Science Foundation of China (Grant Nos. 42188101 and 42130204); the B-type Strategic Priority Program of CAS (Grant No. XDB41000000); the National Natural Science Foundation of China (NSFC) Distinguished Overseas Young Talents Program; the Innovation Program for Quantum Science and Technology (2021ZD0300301); the Open Research Project of Large Research Infrastructures of CAS, "Study on the interaction between low/mid-latitude atmosphere and ionosphere based on the Chinese Meridian Project"; the National Key Laboratory of Deep Space Exploration (Grant No. NKLDSE2023A002); the Open Fund of Anhui Provincial Key Laboratory of Intelligent Underground Detection (Grant No. APKLIUD23KF01); and the China National Space Administration (CNSA) pre-research Project on Civil Aerospace Technologies Nos. D010305 and D010301.
Abstract: Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by the high correlation coefficient between its predictions and RO observations (r = 0.87), than did the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
Funding: supported by the National Key Research and Development Program of China (Grant No. 2024YFA1408604 for K.C. and X.C.); the National Natural Science Foundation of China (Grant Nos. 12047503 and 12447103 for K.C. and X.C., 12325501 for P.Z., and 12275263 for Y.D. and S.H.); the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301900 for Y.D. and S.H.); and the Natural Science Foundation of Fujian Province of China (Grant No. 2023J02032 for Y.D. and S.H.).
Abstract: Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles. While artificial intelligence (AI) offers promise, its typical need for vast datasets to learn from hinders its use in these information-scarce frontiers. We introduce learning at criticality (LaC), a reinforcement learning scheme that tunes large language models (LLMs) to a sharp learning transition, addressing this information scarcity. At this transition, LLMs achieve peak generalization from minimal data, exemplified by 7-digit base-7 addition, a test of nontrivial arithmetic reasoning. To elucidate this peak, we analyze a minimal concept-network model designed to capture the essence of how LLMs might link tokens. Trained on a single exemplar, this model also undergoes a sharp learning transition. This transition exhibits hallmarks of a second-order phase transition, notably power-law distributed solution path lengths. At this critical point, the system maximizes a "critical thinking pattern" crucial for generalization, enabled by the underlying scale-free exploration. This suggests LLMs reach peak performance by operating at criticality, where such explorative dynamics enable the extraction of underlying operational rules. We demonstrate LaC in quantum field theory: an 8B-parameter LLM, tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums, solves unseen, higher-order problems, significantly outperforming far larger models. LaC thus leverages critical phenomena, a physical principle, to empower AI for complex, data-sparse challenges in fundamental physics.
Abstract: Assessing the stability of slopes is one of the crucial tasks of geotechnical engineering for assessing and managing risks related to natural hazards, directly affecting safety and sustainable development. This study focuses on developing robust and practical hybrid models to predict the slope stability status of the circular failure mode. For this purpose, three models were developed using a database of 627 case histories of slope stability status. The models were built with the random forest (RF), support vector machine (SVM), and extreme gradient boosting (XGB) techniques, employing a 5-fold cross-validation approach. To enhance model performance, this study employs a Bayesian optimizer (BO) to fine-tune their hyperparameters. The results indicate that the performance order of the three developed models is RF-BO > SVM-BO > XGB-BO. Furthermore, comparison with previous models shows that the RF-BO model can determine the slope stability status with outstanding performance. This implies that the RF-BO model could serve as a dependable tool for project managers, assisting in the evaluation of slope stability during both the design and operational phases of projects, despite the inherent challenges in this domain. The parameter-importance results indicate that cohesion, friction angle, and slope height exert the most significant influence on slope stability status. This suggests that concentrating on these parameters and employing the RF-BO model can effectively mitigate the severity of geohazards in the short term and contribute to the attainment of long-term sustainable development objectives.
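The 5-fold cross-validation used to score the models can be sketched in a few lines. This is a generic sketch with a toy majority-class baseline standing in for the RF/SVM/XGB learners; the fold scheme, not the classifier, is the point.

```python
import random

def k_fold_indices(n, k=5, seed=42):
    """Split sample indices into k shuffled, disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(X, y, train_and_score, k=5):
    """Hold each fold out once; return the k held-out scores."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for test_idx in folds:
        test_set = set(test_idx)
        tr = [j for j in range(len(X)) if j not in test_set]
        scores.append(train_and_score([X[j] for j in tr], [y[j] for j in tr],
                                      [X[j] for j in test_idx], [y[j] for j in test_idx]))
    return scores

# Toy "classifier": always predict the majority training label; score = accuracy.
def majority_baseline(trX, trY, teX, teY):
    maj = max(set(trY), key=trY.count)
    return sum(1 for t in teY if t == maj) / len(teY)

X = [[i] for i in range(20)]       # stand-ins for slope feature vectors
y = [0] * 12 + [1] * 8             # stable / failed labels
scores = cross_validate(X, y, majority_baseline, k=5)
```

In the hybrid models of the study, a Bayesian optimizer would propose hyperparameters and use the mean of these fold scores as the objective being maximized.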
Funding: supported by a research fund from Chosun University, 2024.
Abstract: Federated learning (FL) enables privacy-preserving training of Transformer-based language models, but remains vulnerable to backdoor attacks that compromise model reliability. This paper presents a comparative analysis of defense strategies against both classical and advanced backdoor attacks, evaluated across autoencoding and autoregressive models. Unlike prior studies, this work provides the first systematic comparison of perturbation-based, screening-based, and hybrid defenses in Transformer-based FL environments. Our results show that screening-based defenses consistently outperform perturbation-based ones, effectively neutralizing most attacks across architectures. However, this robustness comes with significant computational overhead, revealing a clear trade-off between security and efficiency. By explicitly identifying this trade-off, our study advances the understanding of defense strategies in federated learning and highlights the need for lightweight yet effective screening methods for trustworthy deployment in diverse application domains.
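A screening-based defense filters suspicious client updates before aggregation. The sketch below is one minimal variant of that idea, comparing each update to the coordinate-wise median by cosine similarity; the specific defenses benchmarked in the paper are not reproduced here, and the threshold is an assumption.

```python
import math

def cosine(u, v):
    """Cosine similarity of two update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def screen_updates(updates, threshold=0.0):
    """Keep only client updates whose cosine similarity to the
    coordinate-wise median update exceeds the threshold."""
    dims = len(updates[0])
    median = [sorted(u[d] for u in updates)[len(updates) // 2] for d in range(dims)]
    return [u for u in updates if cosine(u, median) > threshold]

# Nine benign clients push the model one way; one backdoored client pushes the other.
benign = [[1.0 + 0.1 * i, 1.0] for i in range(9)]
poisoned = [[-5.0, -5.0]]
kept = screen_updates(benign + poisoned)
```

The computational overhead the abstract mentions comes from exactly this kind of per-round, per-client comparison, which grows with model dimension and client count.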
Funding: supported by the National Natural Science Foundation of China (Nos. 52074246, 52275390, and 52375394); the National Defense Basic Scientific Research Program of China (No. JCKY2020408B002); and the Key R&D Program of Shanxi Province (No. 202102050201011).
Abstract: Accurate retrieval of casting 3D models is crucial for process reuse. Current methods primarily focus on shape similarity and neglect process design features, which compromises reusability. In this study, a novel deep learning retrieval method for process reuse was proposed that integrates process design features into the retrieval of casting 3D models. This method leverages the contrastive language-image pretraining (CLIP) model to extract shape features from the three views and sectional views of the casting model, and combines them with process design features such as modulus, main wall thickness, symmetry, and length-to-height ratio to enhance process reusability. A database of 230 production casting models was established for model validation. Results indicate that incorporating process design features improves model accuracy by 6.09%, reaching 97.82%, and increases process similarity by 30.25%. The reusability of the process was further verified using the casting simulation software EasyCast. The results show that the process retrieved after integrating process design features produces the least shrinkage in the target model, demonstrating the method's superior ability for process reuse. This approach does not require a large dataset for training and optimization, making it highly applicable to casting process design and related manufacturing processes.
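Combining shape-embedding similarity with process-feature similarity can be sketched as a weighted score. This is an assumed fusion scheme for illustration only; the paper's actual combination rule, feature scaling, and weighting are not specified here.

```python
import math

def cosine(u, v):
    """Cosine similarity between two CLIP-style shape embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def process_similarity(p, q):
    """Similarity of process design features (modulus, main wall thickness,
    symmetry, length-to-height ratio), each assumed pre-scaled to [0, 1]."""
    return 1.0 - sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def retrieval_score(shape_a, shape_b, proc_a, proc_b, w=0.5):
    """Blend shape similarity with process-feature similarity.
    The weight w is a placeholder, not the paper's tuned value."""
    return w * cosine(shape_a, shape_b) + (1 - w) * process_similarity(proc_a, proc_b)

query_shape, query_proc = [0.9, 0.1, 0.4], [0.5, 0.3, 1.0, 0.6]
cand_shape, cand_proc = [0.8, 0.2, 0.5], [0.5, 0.35, 1.0, 0.55]
score = retrieval_score(query_shape, cand_shape, query_proc, cand_proc)
```

Ranking candidates by such a blended score is what lets a geometrically similar model with an incompatible process (e.g., a very different modulus) drop below a slightly less similar but process-compatible one.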
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 42375062 and 42275158); the National Key Scientific and Technological Infrastructure project "Earth System Science Numerical Simulator Facility" (EarthLab); and the Natural Science Foundation of Gansu Province (Grant No. 22JR5RF1080).
Abstract: It is fundamental and useful to investigate how deep learning forecasting models (DLMs) perform compared to operational oceanography forecast systems (OFSs). However, few studies have intercompared their performance using an identical reference. In this study, three physically reasonable DLMs are implemented for forecasting the sea surface temperature (SST), sea level anomaly (SLA), and sea surface velocity in the South China Sea. The DLMs are validated against both the testing dataset and the "OceanPredict" Class 4 dataset. Results show that the DLMs' RMSEs against the latter increase by 44%, 245%, 302%, and 109% for SST, SLA, current speed, and direction, respectively, compared to those against the former. Therefore, the choice of reference has a significant influence on validation, and it is necessary to use an identical, independent reference to intercompare DLMs and OFSs. Against the Class 4 dataset, the DLMs present significantly better performance for SLA than the OFSs, and slightly better performance for the other variables. The error patterns of the DLMs and OFSs show a high degree of similarity, which is reasonable from the viewpoint of predictability and facilitates further applications of the DLMs. For extreme events, the DLMs and OFSs both present large but similar forecast errors for SLA and current speed, while the DLMs are likely to give larger errors for SST and current direction. This study provides an evaluation of the forecast skills of commonly used DLMs and an example of how to objectively intercompare different DLMs.
Funding: supported by the Science Planning Project of Liaoning Province, No. 2019JH2/10300031-05, and the National Natural Science Foundation of China, No. 12171074.
Abstract: Artificial intelligence (AI) is rapidly transforming the landscape of hepatology by enabling automated data interpretation, early disease detection, and individualized treatment strategies. Chronic liver diseases, including non-alcoholic fatty liver disease, cirrhosis, and hepatocellular carcinoma, often progress silently and pose diagnostic challenges due to reliance on invasive biopsies and operator-dependent imaging. This review explores the integration of AI across key domains such as big data analytics, deep learning-based image analysis, histopathological interpretation, biomarker discovery, and clinical prediction modeling. AI algorithms have demonstrated high accuracy in liver fibrosis staging, hepatocellular carcinoma detection, and non-alcoholic fatty liver disease risk stratification, while also enhancing survival prediction and treatment response assessment. For instance, convolutional neural networks trained on portal venous-phase computed tomography have achieved areas under the curve of up to 0.92 for significant fibrosis (F2-F4) and 0.89 for advanced fibrosis, with magnetic resonance imaging-based models reporting comparable performance. Advanced methodologies such as federated learning preserve patient privacy during cross-center model training, and explainable AI techniques promote transparency and clinician trust. Despite these advancements, clinical adoption remains limited by challenges including data heterogeneity, algorithmic bias, regulatory uncertainty, and a lack of real-time integration into electronic health records. Looking forward, the convergence of multi-omics, imaging, and clinical data through interpretable and validated AI frameworks holds great promise for precision liver care. Continued efforts in model standardization, ethical oversight, and clinician-centered deployment will be essential to realize the full potential of AI in the diagnosis and treatment of liver disease.
Funding: supported by the National Natural Science Foundation of China Basic Science Center Program for "Multiscale Problems in Nonlinear Mechanics" (Grant No. 11988102) and the National Natural Science Foundation of China (Grant No. 12202451).
Abstract: This paper investigates the capabilities of large language models (LLMs) to leverage, learn, and create knowledge in solving computational fluid dynamics (CFD) problems through three categories of baseline problems. These categories include (1) conventional CFD problems that can be solved using numerical methods already known to LLMs, such as lid-driven cavity flow and the Sod shock tube problem; (2) problems that require new numerical methods beyond those available to LLMs, such as the recently developed Chien-physics-informed neural networks for singularly perturbed convection-diffusion equations; and (3) problems that cannot be solved using numerical methods known to LLMs, such as ill-conditioned Hilbert linear algebraic systems. The evaluations indicate that reasoning LLMs overall outperform non-reasoning models in the four test cases. Reasoning LLMs show excellent performance on CFD problems when given tailored prompts, but their current capability for autonomous knowledge exploration and creation needs to be enhanced.
Funding: financially supported by the National Key R&D Program of China (Nos. 2020YFA0908100 and 2023YFF1204401); the Shenzhen Medical Research Fund (No. B2302037); the National Natural Science Foundation of China (Nos. 22331003 and 21925102); and the Beijing National Laboratory for Molecular Sciences (No. BNLMS-CXXM-202006).
Abstract: Thermophilic proteins maintain their structure and function at high temperatures, making them widely useful in industrial applications. Because experimental measurement is complex, predicting the melting temperature (T_m) of proteins has become a research hotspot. Previous methods rely on the amino acid composition, physicochemical properties of proteins, and the optimal growth temperature (OGT) of hosts for T_m prediction. However, their performance in predicting T_m values for thermophilic proteins (T_m > 60 °C) is generally unsatisfactory due to data scarcity. Herein, we introduce T_mPred, a T_m prediction model for thermophilic proteins that combines a protein language model, a graph convolutional network, and a Graphormer module. In performance evaluation, T_mPred achieves a root mean square error (RMSE) of 5.48 °C, a Pearson correlation coefficient of 0.784, and a coefficient of determination (R^2) of 0.613, representing improvements of 19%, 15%, and 32%, respectively, over state-of-the-art predictive models such as DeepTM. Furthermore, T_mPred demonstrated strong generalization capability on independent blind test datasets. Overall, T_mPred provides an effective tool for the mining and modification of thermophilic proteins by leveraging deep learning.
Funding: supported by the National Research Foundation of Korea grant funded by the Korea government (MSIT) (RS-2025-16067531: Kwangwon Ahn) and the Hankuk University of Foreign Studies Research Fund (of 2025: Sihyun An).
Abstract: The nonlinearity of hedonic datasets demands flexible automated valuation models to appraise housing prices accurately, and artificial intelligence models have been employed in mass appraisal to this end. However, they have been referred to as "black-box" models owing to the difficulty of interpreting them. In this study, we compared the results of traditional hedonic pricing models with those of machine learning algorithms, e.g., random forest and deep neural network models. Commonly implemented measures, e.g., Gini importance and permutation importance, provide only the magnitude of each explanatory variable's importance, which results in ambiguous interpretability. To address this issue, we employed the SHapley Additive exPlanations (SHAP) method and explored its effectiveness through comparisons with traditionally explainable measures in hedonic pricing models. The results demonstrated that (1) the random forest model with the SHAP method can be a reliable instrument for appraising housing prices with high accuracy and sufficient interpretability, (2) the interpretable results retrieved from the SHAP method can be consolidated by the support of statistical evidence, and (3) housing characteristics and local amenities are the primary contributors in property valuation, which is consistent with the findings of previous studies. Thus, our novel methodological framework and robust findings provide informative insights into the use of machine learning methods in property valuation.
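Unlike Gini or permutation importance, SHAP attributes each prediction additively across features. For a model with only a few features, the Shapley values can be computed exactly by enumerating coalitions, which makes the additivity property concrete. The hedonic model, features, and values below are hypothetical, chosen only so the attribution can be checked by hand; real SHAP libraries approximate this computation for high-dimensional models.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance; absent features are set to
    their baseline value. Feasible only for small n (2^n coalitions)."""
    n = len(x)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without = [x[j] if j in S else baseline[j] for j in range(n)]
                total += weight * (predict(with_i) - predict(without))
        phi.append(total)
    return phi

# Hypothetical hedonic model: price from floor area, rooms, distance to transit.
def price(v):
    area, rooms, dist = v
    return 50.0 + 3.0 * area + 10.0 * rooms - 2.0 * dist

x = [80.0, 3.0, 5.0]          # the appraised dwelling
baseline = [60.0, 2.0, 10.0]  # an average dwelling in the dataset
phi = shapley_values(price, x, baseline)
```

The defining property, and what makes SHAP more interpretable than a bare importance magnitude, is that the attributions sum exactly to the difference between this prediction and the baseline prediction.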
Abstract: Floods and storm surges pose significant threats to coastal regions worldwide, demanding timely and accurate early warning systems (EWS) for disaster preparedness. Traditional numerical and statistical methods often fall short in capturing complex, nonlinear, and real-time environmental dynamics. In recent years, machine learning (ML) and deep learning (DL) techniques have emerged as promising alternatives for enhancing the accuracy, speed, and scalability of EWS. This review critically evaluates the evolution of ML models, such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) networks, in coastal flood prediction, highlighting their architectures, data requirements, performance metrics, and implementation challenges. A unique contribution of this work is the synthesis of real-time deployment challenges, including latency, edge-cloud tradeoffs, and policy-level integration, areas often overlooked in prior literature. Furthermore, the review presents a comparative framework of model performance across different geographic and hydrologic settings, offering actionable insights for researchers and practitioners. Limitations of current AI-driven models, such as interpretability, data scarcity, and generalization across regions, are discussed in detail. Finally, the paper outlines future research directions including hybrid modelling, transfer learning, explainable AI, and policy-aware alert systems. By bridging technical performance and operational feasibility, this review aims to guide the development of next-generation intelligent EWS for resilient and adaptive coastal management.
Funding: supported by the National Natural Science Foundation of China under Grants 52325402, 52274057, and 52074340; the National Key R&D Program of China under Grant 2023YFB4104200; the Major Scientific and Technological Projects of CNOOC under Grant CCL2022RCPS0397RSN; the 111 Project under Grant B08028; and the China Scholarship Council under Grant 202306450108.
Abstract: This study introduces a novel approach to addressing the challenges of high-dimensional variables and strong nonlinearity in reservoir production and layer configuration optimization. For the first time, relational machine learning models are applied in reservoir development optimization. Traditional regression-based models often struggle in complex scenarios, but the proposed relational and regression-based composite differential evolution (RRCODE) method combines a Gaussian naive Bayes relational model with a radial basis function network regression model. This integration effectively captures complex relationships in the optimization process, improving both accuracy and convergence speed. Experimental tests on a multi-layer multi-channel reservoir model, the Egg reservoir model, and a real-field reservoir model (the S reservoir) demonstrate that RRCODE significantly reduces water injection and production volumes while increasing economic returns and cumulative oil recovery. Moreover, the surrogate models employed in RRCODE exhibit lightweight characteristics with low computational overhead. These results highlight RRCODE's superior performance in the integrated optimization of reservoir production and layer configurations, offering more efficient and economically viable solutions for oilfield development.
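The differential evolution backbone that RRCODE builds on can be sketched compactly. This is the plain DE/rand/1/bin scheme on a toy objective; the relational and regression surrogates, layer-configuration encoding, and economic objective of the actual method are not reproduced here.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin minimizer with bound clamping."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutate three distinct individuals other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            tc = f(trial)
            if tc <= cost[i]:   # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Toy objective standing in for (negative) economic return of a control vector.
sphere = lambda x: sum(v * v for v in x)
best_x, best_f = differential_evolution(sphere, [(-5, 5)] * 3)
```

In the composite method, the expensive reservoir simulator is replaced for most of these trial evaluations by the relational and regression surrogates, which is where the reported speedup comes from.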
Funding: supported by the National Natural Science Foundation of China (Grant No. 62101087); the China Postdoctoral Science Foundation (Grant No. 2021MD703942); the Chongqing Postdoctoral Research Project Special Funding, China (Grant No. 2021XM2016); the Science Foundation of Chongqing Municipal Commission of Education, China (Grant No. KJQN202100642); and the Chongqing Natural Science Foundation, China (Grant No. cstc2021jcyj-msxmX0834).
Abstract: Drug repurposing offers a promising alternative to traditional drug development and significantly reduces costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advancements in translational medicine and the integration of multi-omics data. We aim to inspire further innovations in multi-source data integration and support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
Funding: supported by the Natural Science Foundation of Jiangsu Province, China (BK20240937); the Belt and Road Special Foundation of the National Key Laboratory of Water Disaster Prevention (2022491411, 2021491811); and the Basal Research Fund of Central Public Welfare Scientific Institution of Nanjing Hydraulic Research Institute (Y223006).
Abstract: Understanding the spatial heterogeneity in groundwater responses to multiple factors is critical for water resource management in coastal cities. Daily groundwater depth (GWD) data from 43 wells (2018-2022) were collected in three coastal cities in Jiangsu Province, China. Seasonal and Trend decomposition using Loess (STL), together with wavelet analysis and empirical mode decomposition, was applied to identify tide-influenced wells, while the remaining wells were grouped by hierarchical clustering analysis (HCA). Machine learning models were developed to predict GWD, and their responses to natural conditions and human activities were then assessed by the SHapley Additive exPlanations (SHAP) method. Results showed that eXtreme Gradient Boosting (XGB) was superior to the other models in terms of prediction performance and computational efficiency (R^2 > 0.95). GWD in Yancheng and southern Lianyungang was greater than in Nantong, exhibiting larger fluctuations. Groundwater within 5 km of the coastline was affected by tides, with more pronounced effects in agricultural areas than in urban areas. Shallow groundwater (3-7 m depth) responded immediately (0-1 day) to rainfall, primarily influenced by farmland and topography (slope and distance from rivers). Rainfall recharge to groundwater peaked at 50% farmland coverage, but this effect was suppressed by high temperatures (> 30 °C), a suppression that intensified with distance from rivers, especially in forest and grassland. Deep groundwater (> 10 m) showed delayed responses to rainfall (1-4 days) and temperature (10-15 days), with GDP as the primary influence, followed by agricultural irrigation and population density. Farmland helped maintain stable GWD in regions of low population density, while excessive farmland coverage (> 90%) led to overexploitation. In the early stages of GDP development, increased industrial and agricultural water demand led to GWD decline, but as GDP levels improved significantly, groundwater consumption pressure gradually eased. This methodological framework is applicable not only to coastal cities in China but could also be extended to coastal regions worldwide.
Funding: supported by the National Natural Science Foundation of China (U21A20166); the Science and Technology Development Foundation of Jilin Province (20230508095RC); the Major Science and Technology Projects of Jilin Province and Changchun City (20220301033GX); the Development and Reform Commission Foundation of Jilin Province (2023C034-3); and the Interdisciplinary Integration and Innovation Project of JLU (JLUXKJC2020202).
Abstract: Dear Editor, Aiming at the consensus tracking problem of a class of unknown heterogeneous nonlinear multiagent systems (MASs) with input constraints, a novel data-driven iterative learning consensus control (ILCC) protocol based on zeroing neural networks (ZNNs) is proposed. First, a dynamic linearization data model (DLDM) is acquired via dynamic linearization technology (DLT).
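The core ZNN idea is a design formula that drives a time-varying error exponentially to zero. A minimal scalar sketch follows for the equation a(t)·x(t) = b(t): with error e = a·x − b, imposing de/dt = −γ·e yields dx/dt = (−γ·e − a'·x + b')/a. This is only the scalar ZNN mechanism, not the MAS consensus protocol of the letter; γ, dt, and the test functions are illustrative choices.

```python
import math

def znn_track(a, da, b, db, x0, gamma=50.0, dt=1e-3, t_end=1.0):
    """Euler-integrated scalar zeroing neural network for a(t)*x(t) = b(t).

    The design formula de/dt = -gamma*e (e = a*x - b) gives
    dx/dt = (-gamma*e - da*x + db) / a, so x(t) tracks b(t)/a(t).
    """
    x, t = x0, 0.0
    while t < t_end:
        e = a(t) * x - b(t)
        x += dt * ((-gamma * e - da(t) * x + db(t)) / a(t))
        t += dt
    return x

# Track x(t) = b(t)/a(t) with a(t) = 2 + sin(t) and b(t) = cos(t).
a, da = lambda t: 2.0 + math.sin(t), lambda t: math.cos(t)
b, db = lambda t: math.cos(t), lambda t: -math.sin(t)
x_end = znn_track(a, da, b, db, x0=0.0)
```

Because the drift terms a'·x and b' appear explicitly in the update, the tracking error decays at rate γ even though the target b(t)/a(t) keeps moving, which is what makes ZNNs attractive for time-varying consensus errors.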
Funding: funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project, number PNURSP2025R752, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The rapid shift to online education has introduced significant challenges to maintaining academic integrity in remote assessments, as traditional proctoring methods fall short in preventing cheating. The increase in cheating during online exams highlights the need for efficient, adaptable detection models to uphold academic credibility. This paper presents a comprehensive analysis of various deep learning models for cheating detection in online proctoring systems, evaluating their accuracy, efficiency, and adaptability. We benchmark several advanced architectures, including EfficientNet, MobileNetV2, ResNet variants, and others, using two specialized datasets (OEP and OP) tailored for online proctoring contexts. Our findings reveal that EfficientNetB1 and YOLOv5 achieve the top performance on the OP dataset, with EfficientNetB1 attaining a peak accuracy of 94.59% and YOLOv5 reaching a mean average precision (mAP@0.5) of 98.3%. For the OEP dataset, ResNet50-CBAM, YOLOv5, and EfficientNetB0 stand out, with ResNet50-CBAM achieving an accuracy of 93.61% and EfficientNetB0 showing robust detection performance with balanced accuracy and computational efficiency. These results underscore the importance of selecting models that balance accuracy and efficiency, supporting scalable, effective cheating detection in online assessments.
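The mAP@0.5 figure reported for YOLOv5 rests on an IoU-threshold matching rule: a predicted box counts as a true positive only if it overlaps an unmatched ground-truth box with intersection-over-union of at least 0.5. The sketch below shows just that matching step (the full metric additionally sorts detections by confidence and integrates precision-recall per class); the boxes are toy values.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

def true_positives_at_50(preds, gts):
    """Count predictions matching some unmatched ground-truth box at IoU >= 0.5,
    the matching rule underlying mAP@0.5."""
    unmatched = list(gts)
    tp = 0
    for p in preds:
        hit = next((g for g in unmatched if iou(p, g) >= 0.5), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    return tp

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]   # one good detection, one false positive
tp = true_positives_at_50(preds, gts)
```

Loosening or tightening the 0.5 threshold is what distinguishes mAP@0.5 from stricter variants such as mAP@0.5:0.95.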
Abstract: BACKGROUND Severe dengue in children with critical complications has been associated with high mortality rates, varying from approximately 1% to over 20%. To date, there is a lack of data on machine-learning-based algorithms for predicting the risk of in-hospital mortality in children with dengue shock syndrome (DSS). AIM To develop machine-learning models to estimate the risk of death in hospitalized children with DSS. METHODS This single-center retrospective study was conducted at the tertiary Children's Hospital No. 2 in Viet Nam between 2013 and 2022. The primary outcome was the in-hospital mortality rate in children with DSS admitted to the pediatric intensive care unit (PICU). Nine significant features were predetermined for further analysis using machine learning models. An oversampling method was used to enhance model performance. Supervised models, including logistic regression, Naïve Bayes, Random Forest (RF), K-nearest neighbors, Decision Tree, and Extreme Gradient Boosting (XGBoost), were employed to develop predictive models. Shapley Additive Explanations were used to determine the degree of contribution of each feature. RESULTS In total, 1278 PICU-admitted children with complete data were included in the analysis. The median patient age was 8.1 years (interquartile range: 5.4-10.7). Thirty-nine patients (3%) died. The RF and XGBoost models demonstrated the highest performance. The Shapley Additive Explanations revealed that the most important predictive features included younger age, female sex, presence of underlying diseases, severe transaminitis, severe bleeding, low platelet counts requiring platelet transfusion, elevated international normalized ratio, blood lactate and serum creatinine levels, a large volume of resuscitation fluid, and a high vasoactive inotropic score (> 30). CONCLUSION We developed robust machine learning-based models to estimate the risk of death in hospitalized children with DSS. The study findings are applicable to the design of management schemes to enhance the survival outcomes of patients with DSS.
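With only 39 deaths among 1278 children, the classes are heavily imbalanced, which is why the study applies an oversampling step before training. The simplest variant, random duplication of minority-class samples, is sketched below; the study does not name its specific method (techniques such as SMOTE are common alternatives), so this is an assumed stand-in.

```python
import random

def random_oversample(X, y, target_label, seed=0):
    """Duplicate samples of target_label at random until classes are balanced."""
    rng = random.Random(seed)
    minority = [(x, t) for x, t in zip(X, y) if t == target_label]
    majority = [(x, t) for x, t in zip(X, y) if t != target_label]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    balanced = majority + minority + extra
    rng.shuffle(balanced)
    Xb, yb = zip(*balanced)
    return list(Xb), list(yb)

# Toy data: 3 deaths in 100 mirrors the roughly 3% mortality in the cohort.
X = [[i] for i in range(100)]
y = [1] * 3 + [0] * 97
Xb, yb = random_oversample(X, y, target_label=1)
```

Balancing is done on the training split only; oversampling before the train/test split would leak duplicated minority samples into the evaluation set and inflate performance.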