Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES that combines text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection and data size on model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R² of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R² to 88.95%. In Experiment 4, a data-efficient training approach was introduced, in which the training data portion was increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R² of 85.49%, demonstrating an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
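The top-N correlated-feature baseline of Experiment 1 can be sketched as follows. The paper does not give the exact scoring rule of its non-machine-learning baseline, so the correlation-weighted predictor below is an illustrative assumption, as is the synthetic data:

```python
import numpy as np

def top_n_correlated(X, y, n):
    # Rank features by absolute Pearson correlation with the essay score.
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(-np.abs(r))[:n], r

def baseline_predict(X, y, idx, r):
    # Hypothetical non-ML baseline: correlation-weighted average of z-scored
    # features, mapped back onto the score scale.
    Z = (X[:, idx] - X[:, idx].mean(axis=0)) / X[:, idx].std(axis=0)
    w = r[idx] / np.abs(r[idx]).sum()
    return y.mean() + y.std() * (Z * w).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                 # 6 candidate similarity features
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.normal(size=200)
idx, r = top_n_correlated(X, y, 2)            # picks the two informative features
```

The same `top_n_correlated` ranking is what a sweep like Experiment 3 would vary N over before feeding the surviving features to an RF model.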
Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites like MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method to co-estimate the core, crustal, and large-scale magnetospheric fields using satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when utilizing the constructed magnetic field models for scientific research and applications.
BACKGROUND Relieving pain is central to the early management of knee osteoarthritis, with a plethora of pharmacological agents licensed for this purpose. Intra-articular corticosteroid injections are a widely used option, albeit with variable efficacy. AIM To develop a machine learning (ML) model that predicts which patients will benefit from corticosteroid injections. METHODS Data from two prospective cohort studies [Osteoarthritis (OA) Initiative and Multicentre OA Study] were combined. The primary outcome was the patient-reported pain score following corticosteroid injection, assessed using the Western Ontario and McMaster Universities OA pain scale, with significant change defined using the minimally clinically important difference and meaningful within-person change. An ML algorithm was developed, utilizing linear discriminant analysis, to predict symptomatic improvement and to examine the association between pain scores and patient factors by calculating the sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and F2 score. RESULTS A total of 330 patients were included, with a mean age of 63.4 (SD: 8.3). The mean Western Ontario and McMaster Universities OA pain score was 5.2 (SD: 4.1), with only 25.5% of patients achieving significant improvement in pain following corticosteroid injection. The ML model generated an accuracy of 67.8% (95% confidence interval: 64.6%-70.9%), an F1 score of 30.8%, and an area-under-the-curve score of 0.60. CONCLUSION The model demonstrated feasibility to assist clinicians with decision-making in patient selection for corticosteroid injections. Further studies are required to improve the model prior to testing in clinical settings.
In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments and can improve therapeutic effect and safety; it is therefore of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through simulation studies. Finally, we demonstrate the application of this method by analyzing data from the PA.3 trial, further illustrating its practicality.
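The penalized quantile-regression machinery underlying this kind of method can be sketched with subgradient descent on the pinball loss plus an L1 penalty. This toy estimator and its data are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def quantile_lasso(X, y, tau=0.5, lam=1e-3, lr=0.05, iters=5000):
    # Subgradient descent on (1/n) * sum pinball_tau(y - X@beta - b0) + lam * ||beta||_1.
    n, p = X.shape
    beta, b0 = np.zeros(p), 0.0
    for _ in range(iters):
        res = y - (X @ beta + b0)
        # Subgradient of the pinball loss w.r.t. the fitted value:
        # -tau on under-predictions (res >= 0), (1 - tau) on over-predictions.
        g = np.where(res >= 0, -tau, 1.0 - tau)
        beta -= lr * (X.T @ g / n + lam * np.sign(beta))
        b0 -= lr * g.mean()
    return beta, b0

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=300)   # only the first covariate matters
beta, b0 = quantile_lasso(X, y, tau=0.5)          # median regression with L1 penalty
```

Setting `tau` to other quantiles (e.g. 0.25 or 0.75) is what lets quantile regression characterize the whole response distribution rather than just its mean.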
Selecting proper descriptors (also known as feature selection, FS) is key to establishing a mechanical-properties prediction model for hot-rolled microalloyed steels using machine learning (ML) algorithms. Data-driven FS methods can reduce the redundancy of data features and improve the prediction accuracy of mechanical properties. Based on the collected data of hot-rolled microalloyed steels, association rules are used to mine the correlation information between the data. High-quality feature subsets are selected by the proposed FS method (an FS method based on genetic algorithm embedding, GAMIC). Compared with common FS methods, it is shown on the dataset that GAMIC selects feature subsets more appropriately. Six different ML algorithms are trained and tested for mechanical-property prediction. The results show that, based on the extreme gradient boosting (XGBoost) algorithm, the root-mean-square errors of yield strength, tensile strength, and elongation are 21.95 MPa, 20.85 MPa, and 1.96%, the correlation coefficients (R²) are 0.969, 0.968, and 0.830, and the mean absolute errors are 16.84 MPa, 15.83 MPa, and 1.48%, respectively, showing the best prediction performance. Finally, SHapley Additive exPlanations (SHAP) is used to further explore the influence of feature variables on mechanical properties. The proposed GAMIC feature selection method is general-purpose and provides a basis for the development of high-precision mechanical-property prediction models.
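The abstract does not spell out GAMIC's internals, but the genetic-algorithm core of such embedded FS methods can be illustrated with a generic bitmask GA (tournament selection, uniform crossover, bit-flip mutation, elitism). The fitness function below is a synthetic stand-in, not the paper's objective:

```python
import numpy as np

rng = np.random.default_rng(7)

def ga_select(fitness, n_feat, pop=30, gens=40, p_mut=0.05):
    # Individuals are boolean feature masks; evolve them by tournament
    # selection, uniform crossover, bit-flip mutation, and elitism.
    P = rng.random((pop, n_feat)) < 0.5
    best = P[0].copy()
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        if f.max() > fitness(best):
            best = P[f.argmax()].copy()
        a, b = rng.integers(0, pop, size=(2, pop))
        parents = P[np.where(f[a] > f[b], a, b)]               # tournament selection
        cross = rng.random((pop, n_feat)) < 0.5
        children = np.where(cross, parents, np.roll(parents, 1, axis=0))
        P = children ^ (rng.random((pop, n_feat)) < p_mut)     # bit-flip mutation
        P[0] = best                                            # elitism
    return best

# Stand-in fitness: reward including features 0 and 2, penalize subset size.
# In an embedded FS method this would be a cross-validated model score.
def toy_fitness(mask):
    return 3 * (int(mask[0]) + int(mask[2])) - int(mask.sum())

best = ga_select(toy_fitness, n_feat=6)
```

In a real embedded setting the fitness call would train and score an ML model on the masked feature subset, which is what makes GA-based FS expensive but model-aware.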
In this paper, a feature selection method for determining input parameters in antenna modeling is proposed. In antenna modeling, the input features of the artificial neural network (ANN) are geometric parameters. The selection criteria consider both the correlation and the sensitivity between a geometric parameter and the electromagnetic (EM) response. The maximal information coefficient (MIC), an exploratory data mining tool, is introduced to evaluate both linear and nonlinear correlations. The EM response range is utilized to evaluate sensitivity: a wide response range over varying values of a parameter implies the parameter is highly sensitive, while a narrow response range suggests the parameter is insensitive. Only parameters that are both highly correlated and sensitive are selected as inputs to the ANN, so the sampling space of the model is greatly reduced. The modeling of a wideband, circularly polarized antenna is studied as an example to verify the effectiveness of the proposed method. The number of input parameters decreases from 8 to 4. The testing errors of |S11| and axial ratio are reduced by 8.74% and 8.95%, respectively, compared with an ANN with no feature selection.
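MIC itself maximizes a normalized mutual information over many grid resolutions; as a simplified single-grid stand-in, a histogram mutual-information estimate already captures the linear-plus-nonlinear correlation screening described above (the parameter sweep and responses below are synthetic):

```python
import numpy as np

def mutual_info(x, y, bins=8):
    # Plug-in MI estimate (in nats) from a 2-D histogram. MIC would maximize a
    # normalized variant of this over many grid shapes; one fixed grid suffices
    # to separate clearly dependent from independent parameter/response pairs.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
g = rng.uniform(-1, 1, 2000)                         # a geometric parameter sweep
resp_dep = g ** 2 + 0.01 * rng.normal(size=2000)     # nonlinear response: high MI
resp_ind = rng.uniform(-1, 1, 2000)                  # unrelated response: MI near 0
```

Note the dependent pair is purely nonlinear (its Pearson correlation is near zero), which is exactly the case where MIC-style measures outperform a plain correlation criterion.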
Feature selection (FS) is a pivotal pre-processing step in developing data-driven models, influencing reliability, performance, and optimization. Although existing FS techniques can yield high performance metrics for certain models, they do not invariably guarantee the extraction of the most critical or impactful features. Prior literature underscores the significance of equitable FS practices and has proposed diverse methodologies for the identification of appropriate features. However, the challenge of discerning the most relevant and influential features persists, particularly in the context of the exponential growth and heterogeneity of big data, a challenge that is increasingly salient in modern artificial intelligence (AI) applications. In response, this study introduces an innovative, automated statistical method termed Farea Similarity for Feature Selection (FSFS). The FSFS approach computes a similarity metric for each feature by benchmarking it against the record-wise mean, thereby capturing feature dependencies and mitigating the influence of outliers that could otherwise distort evaluation outcomes. Features are subsequently ranked according to their similarity scores, with the threshold established at the average similarity score. Notably, lower FSFS values indicate higher similarity and stronger data correlations, whereas higher values indicate lower similarity. The FSFS method is designed not only to yield reliable evaluation metrics but also to reduce data complexity without compromising model performance. Comparative analyses were performed against several established techniques, including Chi-squared (CS), Correlation Coefficient (CC), Genetic Algorithm (GA), Exhaustive Approach, Greedy Stepwise Approach, Gain Ratio, and Filtered Subset Eval, using a variety of datasets such as the Experimental Dataset, Breast Cancer Wisconsin (Original), KDD CUP 1999, NSL-KDD, UNSW-NB15, and Edge-IIoT. In the absence of the FSFS method, the highest classifier accuracies observed were 60.00%, 95.13%, 97.02%, 98.17%, 95.86%, and 94.62% for the respective datasets. When the FSFS technique was integrated with data normalization, encoding, balancing, and feature-importance selection processes, accuracies improved to 100.00%, 97.81%, 98.63%, 98.94%, 94.27%, and 98.46%, respectively. The FSFS method, with a computational complexity of O(fn log n), demonstrates robust scalability and is well suited to large datasets, ensuring efficient processing even when the number of features is substantial. By automatically eliminating outliers and redundant data, FSFS reduces computational overhead, resulting in faster training and improved model performance. Overall, the FSFS framework not only optimizes performance but also enhances the interpretability and explainability of data-driven models, thereby facilitating more trustworthy decision-making in AI applications.
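Reading the description above literally, a minimal sketch of the FSFS scoring loop might look like this; the min-max scaling and the mean-absolute-deviation similarity metric are assumptions, not the paper's exact formulas:

```python
import numpy as np

def fsfs_rank(X):
    # Min-max scale so features are comparable, benchmark every record against
    # its record-wise mean, and score each feature by its mean absolute
    # deviation from that benchmark. Lower score = higher similarity, i.e.
    # stronger correlation with the bulk of the data; keep features whose
    # score is at or below the average score.
    Xs = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    row_mean = Xs.mean(axis=1, keepdims=True)
    scores = np.abs(Xs - row_mean).mean(axis=0)
    keep = np.where(scores <= scores.mean())[0]
    return scores, keep

rng = np.random.default_rng(3)
t = rng.uniform(size=(500, 1))
X = np.hstack([t,
               t + 0.02 * rng.normal(size=(500, 1)),
               0.9 * t + 0.02 * rng.normal(size=(500, 1)),
               rng.uniform(size=(500, 1))])   # last column: unrelated noise
scores, keep = fsfs_rank(X)                   # the noise column scores worst
```

On this synthetic data the three mutually correlated columns fall below the average-score threshold while the unrelated column is rejected, mirroring the ranking behavior the abstract describes.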
With the development of More Electric Aircraft (MEA), the Permanent Magnet Synchronous Motor (PMSM) is widely used in the MEA field. The PMSM control system of an MEA must consider system reliability, and the inverter switching frequency is one of the influencing factors. At the same time, the control accuracy of the system also needs to be considered, for which torque ripple and flux ripple are usually regarded as the important indicators. This paper proposes a three-stage series Model Predictive Torque and Flux Control system (three-stage series MPTFC) based on fast optimal voltage vector selection to reduce the switching frequency and suppress torque ripple and flux ripple. Firstly, the analytical model of the PMSM is established and the multi-stage series control method is used to reduce the switching frequency. Secondly, the selectable voltage vectors are extended from 8 to 26, and a fast selection method for optimal voltage vector sectors is designed based on a hysteresis comparator, which suppresses torque ripple and flux ripple to improve control accuracy. Thirdly, the three-stage series control is obtained by expanding the two-stage series control using the P-Q torque decomposition theory. Finally, a model predictive torque and flux control experimental platform is built, and the feasibility and effectiveness of the method are verified through comparison experiments.
In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
Cloud data centres face a growing energy-management problem driven by their constant increase in size and complexity and their enormous energy consumption. Energy management is a challenging issue that is critical in cloud data centres and an active concern of current research. In this paper, we propose a cuckoo search (CS)-based optimisation technique for virtual machine (VM) selection, together with a novel placement algorithm that considers the different constraints. An energy consumption model and a simulation model have been implemented for the efficient selection of VMs. The proposed model, CSOA-VM, not only lessens violations at the service level agreement (SLA) level but also minimises VM migrations. The proposed model also saves energy: the performance analysis shows that the energy consumption obtained is 1.35 kWh, the SLA violation rate is 9.2, and the number of VM migrations is about 268. Thus, there is an improvement in energy consumption of about 1.8% and a 2.1% reduction in SLA violations in comparison to existing techniques.
Heart disease prediction is a critical issue in healthcare, where accurate early diagnosis can save lives and reduce healthcare costs. The problem is inherently complex due to the high dimensionality of medical data, irrelevant or redundant features, and the variability in risk factors such as age, lifestyle, and medical history. These challenges often lead to inefficient and less accurate models. Traditional prediction methodologies face limitations in effectively handling large feature sets and optimizing classification performance, which can result in overfitting, poor generalization, and high computational cost. This work proposes a novel classification model for heart disease prediction that addresses these challenges by integrating feature selection through a Genetic Algorithm (GA) with an ensemble deep learning approach optimized using the Tunicate Swarm Algorithm (TSA). The GA selects the most relevant features, reducing dimensionality and improving model efficiency. The selected features are then used to train an ensemble of deep learning models, where the TSA optimizes the weight of each model in the ensemble to enhance prediction accuracy. This hybrid approach addresses key challenges in the field, such as high dimensionality, redundant features, and classification performance, by introducing an efficient feature selection mechanism and optimizing the weighting of the deep learning models in the ensemble. These enhancements result in a model that achieves superior accuracy, generalization, and efficiency compared to traditional methods. The proposed model demonstrated notable advancements in both prediction accuracy and computational efficiency over traditional models. Specifically, it achieved an accuracy of 97.5%, a sensitivity of 97.2%, and a specificity of 97.8%. Additionally, with a 60-40 data split and 5-fold cross-validation, the model showed a significant reduction in training time (90 s), memory consumption (950 MB), and CPU usage (80%), highlighting its effectiveness in processing large, complex medical datasets for heart disease prediction.
To guarantee the computational accuracy of the finite element model, a strain-compensated Arrhenius-type model, a modified Fields-Backofen (m-FB) model, and a modified Zerilli-Armstrong (m-ZA) model were established to predict the high-temperature flow stress of as-cast low-alloyed Al-0.5Cu, Al-1Si, and Al-1Si-0.5Cu. To determine the material constants of these three constitutive models, isothermal compression tests of the three aluminum alloys were carried out on a Gleeble-3800 thermal simulator. The predictions of the constitutive models were compared with the experimental results to evaluate their accuracy and to provide a basis for selecting the most suitable constitutive models (parameters) for the three alloys mentioned above. It is found that the strain-compensated Arrhenius model and the m-ZA model can be regarded as the most suitable constitutive models for the Al-0.5Cu and Al-1Si alloys, respectively, and these two constitutive models can also be applied to the Al-1Si-0.5Cu alloy. However, the m-FB model can be applied to the Al-0.5Cu, Al-1Si, and Al-1Si-0.5Cu alloys only under high-temperature and medium-strain conditions.
This study explores the determinants of impact on ecology in Northern Tanzania. By examining key socio-economic, institutional, and structural factors influencing engagement, the study provides insights into strengthening agribusiness networks and improving livelihoods. Data were collected from 215 farmers and 320 traders through a multistage sampling procedure. Heckman's sample selection model was used in the data analysis, and the findings showed that the key factors influencing farmers' decisions on ecology were gender and years of formal education at p<0.1, and access to finance and off-farm income at p<0.05. The degree of farmers' participation in social groups was influenced by age, household size, off-farm income, and business network at p<0.05; number of years in formal education and access to finance at p<0.01; and distance to the market at p<0.1. The decision of traders to impact on ecology was significantly influenced by age and trading experience at p<0.1. Meanwhile, the degree of their involvement in social groups was strongly affected by gender, formal education, and trust at p<0.01, as well as by access to finance and business networks at p<0.05. The study concluded that natural ecology is influenced by socio-economic and structural factors, but trust among group members determines the degree of participation. The study recommends that strategies to improve agribusiness networks should address the underlying causes of impact on ecology and strengthen existing social groups to improve the performance of farmers and traders.
The probability of phase formation was predicted using the k-nearest neighbor (KNN) and artificial neural network (ANN) algorithms. Additionally, the composition ranges of Ti, Cu, Ni, and Hf in 40 unknown amorphous alloy composites (AACs) were predicted using the ANN. The predicted alloys were then experimentally verified through X-ray diffraction (XRD) and high-resolution transmission electron microscopy (HRTEM). The prediction accuracies of the ANN for the AM and IM phases are 93.12% and 85.16%, respectively, while those of the KNN are 93% and 84%, respectively. It is observed that when the contents of Ti, Cu, Ni, and Hf fall within the ranges of 32.7-34.5 at.%, 16.4-17.3 at.%, 30.9-32.7 at.%, and 17.3-18.3 at.%, respectively, AACs are more likely to form. Based on the XRD and HRTEM results, the Ti34Cu17Ni31.36Hf17.64 and Ti36Cu18Ni29.44Hf16.56 alloys are identified as good AACs, closely consistent with the predicted amorphous alloy compositions.
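The KNN side of this phase-formation prediction is straightforward to sketch: composition vectors vote by Euclidean distance. The cluster centers and 0/1 labels below (standing in for IM/AM phases) are synthetic, not the study's data:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    # Classify each test composition by majority vote of its k nearest
    # training compositions (Euclidean distance in composition space).
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_train[row]).argmax() for row in nn])

rng = np.random.default_rng(4)
# Synthetic 4-component compositions (at.% of Ti, Cu, Ni, Hf) around two centers.
am = rng.normal([33.6, 16.8, 31.8, 17.8], 0.3, size=(40, 4))   # amorphous-like
im = rng.normal([45.0, 10.0, 25.0, 20.0], 0.3, size=(40, 4))   # intermetallic-like
X = np.vstack([am, im])
y = np.array([1] * 40 + [0] * 40)                               # 1 = AM, 0 = IM
test = np.array([[33.5, 17.0, 31.5, 18.0],
                 [45.2, 9.8, 25.1, 19.9]])
pred = knn_predict(X, y, test)
```

In practice compositions would be scaled before distance computation when components span very different ranges; here all four are in at.% so raw Euclidean distance is reasonable.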
The significance of accurately forecasting natural gas prices is far-reaching, not only for the stable operation of the energy market but also as a key element in promoting sustainable development and addressing environmental challenges. However, natural gas prices are affected by multiple source factors and present complex, unstable nonlinear characteristics that hinder the improvement of the prediction accuracy of existing models. To address this issue, this study proposes an innovative multivariate combined forecasting model for natural gas prices. Initially, the study identifies and introduces 16 variables impacting natural gas prices across five crucial dimensions: production, marketing, commodities, political and economic indicators of the United States, and temperature. Subsequently, the study employs the least absolute shrinkage and selection operator (LASSO), grey relational analysis, and random forests for dimensionality reduction, effectively screening out the most influential key variables to serve as input features for the subsequent learning model. Building upon this foundation, a suite of machine learning models is constructed to ensure precise natural gas price prediction. To further elevate predictive performance, an intelligent algorithm for parameter optimization is incorporated, addressing potential limitations of individual models. To thoroughly assess the prediction accuracy of the proposed model, this study conducts three experiments using monthly natural gas trading prices. These experiments incorporate 19 benchmark models for comparative analysis, utilizing five evaluation metrics to quantify forecasting effectiveness. Furthermore, this study conducts in-depth validation of the proposed model's effectiveness through hypothesis testing, discussions of the improvement ratio of forecasting performance, and case studies on other energy prices. The empirical results demonstrate that the multivariate combined forecasting method developed in this study surpasses the comparative models in forecasting accuracy. It offers new perspectives and methodologies for natural gas price forecasting while also providing valuable insights for other energy price forecasting studies.
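The LASSO screening step can be sketched with plain coordinate descent; the five candidate drivers and the penalty value below are synthetic placeholders for the study's 16 variables:

```python
import numpy as np

def lasso_cd(X, y, lam=0.1, iters=200):
    # Coordinate descent on (1/(2n)) * ||y - X@beta||^2 + lam * ||beta||_1:
    # refit each coefficient against its partial residual, then soft-threshold.
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]    # residual excluding feature j
            z = X[:, j] @ r
            beta[j] = np.sign(z) * max(abs(z) - lam * n, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 5))                        # 5 candidate price drivers
y = 2.0 * X[:, 0] - 3.0 * X[:, 2] + 0.05 * rng.normal(size=200)
beta = lasso_cd(X, y, lam=0.1)                       # only drivers 0 and 2 survive
```

The soft-thresholding step is what zeroes out weak drivers exactly, giving the screening behavior (at the cost of a small shrinkage bias on the surviving coefficients) that makes LASSO a natural first filter before the downstream learners.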
Model evaluation using benchmark datasets is an important method to measure the capability of large language models (LLMs) in specific domains, and it is mainly used to assess the knowledge and reasoning abilities of LLMs. To better assess the capability of LLMs in the agricultural domain, Agri-Eval was proposed as a benchmark for assessing the knowledge and reasoning ability of LLMs in agriculture. The assessment dataset used in Agri-Eval covers seven major disciplines in the agricultural domain, namely crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contains a total of 2283 questions. Among domestic (Chinese) general-purpose LLMs, DeepSeek R1 performed best, with an accuracy rate of 75.49%. Among international general-purpose LLMs, Gemini 2.0 Pro Exp 0205 stood out as the top performer, achieving an accuracy rate of 74.28%. As an agriculture-vertical LLM, Shennong V2.0 outperformed all the LLMs in China, and its answer accuracy on agricultural knowledge exceeded that of all the existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate a model's capability in the field of agriculture through a variety of tasks and tests, promoting the development of LLMs in the agricultural field.
In recent years, there has been an increasing need for climate information across diverse sectors of society. This demand has arisen from the necessity to adapt to and mitigate the impacts of climate variability and change. Likewise, this period has seen a significant increase in our understanding of the physical processes and mechanisms that drive precipitation and its variability across different regions of Africa. By leveraging a large volume of climate model outputs, numerous studies have investigated the model representation of African precipitation as well as the underlying physical processes. These studies have assessed whether the physical processes are well depicted and whether the models are fit for informing mitigation and adaptation strategies. This paper provides a review of the progress in precipitation simulation over Africa in state-of-the-science climate models and discusses the major issues and challenges that remain.
The Financial Technology (FinTech) sector has witnessed rapid growth, resulting in increasingly complex and high-volume digital transactions. Although this expansion improves efficiency and accessibility, it also introduces significant vulnerabilities, including fraud, money laundering, and market manipulation. Traditional anomaly detection techniques often fail to capture the relational and dynamic characteristics of financial data. Graph Neural Networks (GNNs), capable of modeling intricate interdependencies among entities, have emerged as a powerful framework for detecting subtle and sophisticated anomalies. However, the high dimensionality and inherent noise of FinTech datasets demand robust feature selection strategies to improve model scalability, performance, and interpretability. This paper presents a comprehensive survey of GNN-based approaches for anomaly detection in FinTech, with an emphasis on the synergistic role of feature selection. We examine the theoretical foundations of GNNs, review state-of-the-art feature selection techniques, analyze their integration with GNNs, and categorize prevalent anomaly types in FinTech applications. In addition, we discuss practical implementation challenges, highlight representative case studies, and propose future research directions to advance the field of graph-based anomaly detection in financial systems.
Utilizing finite element analysis, the ballistic protection provided by a combination of perforated D-shaped and base armor plates, collectively referred to as radiator armor, is evaluated. ANSYS Explicit Dynamics is employed to simulate the ballistic impact of 7.62 mm armor-piercing projectiles on Aluminum AA5083-H116 and Steel Secure 500 armors, focusing on the evaluation of material deformation and penetration resistance at varying impact points. While the D-shaped armor plate alone is penetrated by the armor-piercing projectiles, the combination of the perforated D-shaped and base armor plates successfully halts penetration. A numerical model based on the finite element method is developed using software such as SolidWorks and ANSYS to analyze the interaction between the radiator armor and the bullet. The perforated design of the radiator armor maintains airflow for radiator function, with hole sizes smaller than the bullet core diameter to protect the radiator assemblies. Brittle fracture resulting from bending of the projectile core under asymmetric impact is predicted, and the resulting fragments fail to penetrate the perforated base armor plate. Craters are formed on the surface of the perforated D-shaped armor plate by the impact of projectile fragments. The numerical model accurately predicts hole growth and projectile penetration upon impact with the armor, demonstrating effective protection of the radiator assemblies by the radiator armor.
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. DGSSR-2024-02-01264.
Funding: Supported by the National Natural Science Foundation of China (42250101) and the Macao Foundation.
Abstract: Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites such as MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method to co-estimate the core, crustal, and large-scale magnetospheric fields using satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when utilizing the constructed magnetic field models for scientific research and applications.
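The kind of dark-region and quiet-time filtering described above can be sketched as a simple record filter. The thresholds below (Kp index and solar zenith angle) are common choices in the modeling literature, assumed here for illustration, not values taken from this study.

```python
# Illustrative data-selection filter: keep satellite samples taken in dark
# regions (large solar zenith angle) during geomagnetically quiet times
# (low Kp index), to suppress ionospheric/magnetospheric current signals.

def select_quiet_dark(records, kp_max=2.0, sun_zenith_min=100.0):
    """Return only records satisfying both quiet-time and darkness criteria."""
    return [r for r in records
            if r["kp"] <= kp_max and r["solar_zenith_deg"] >= sun_zenith_min]

records = [
    {"kp": 1.3, "solar_zenith_deg": 120.0},  # quiet + dark: kept
    {"kp": 4.0, "solar_zenith_deg": 130.0},  # disturbed: dropped
    {"kp": 0.7, "solar_zenith_deg": 60.0},   # sunlit: dropped
]
print(len(select_quiet_dark(records)))  # → 1
```

Varying `kp_max` and `sun_zenith_min` is exactly the experiment the study performs: each choice yields a different data subset and hence a slightly different internal field model.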
Funding: Supported by the National Institute for Health and Care Research, No. NIHR302632.
Abstract: BACKGROUND Relieving pain is central to the early management of knee osteoarthritis, with a plethora of pharmacological agents licensed for this purpose. Intra-articular corticosteroid injections are a widely used option, albeit with variable efficacy. AIM To develop a machine learning (ML) model that predicts which patients will benefit from corticosteroid injections. METHODS Data from two prospective cohort studies [the Osteoarthritis (OA) Initiative and the Multicentre OA Study] were combined. The primary outcome was the patient-reported pain score following corticosteroid injection, assessed using the Western Ontario and McMaster Universities OA pain scale, with significant change defined using the minimally clinically important difference and meaningful within-person change. A ML algorithm utilizing linear discriminant analysis was developed to predict symptomatic improvement and to examine the association between pain scores and patient factors by calculating the sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and F2 score. RESULTS A total of 330 patients were included, with a mean age of 63.4 (SD: 8.3). The mean Western Ontario and McMaster Universities OA pain score was 5.2 (SD: 4.1), with only 25.5% of patients achieving significant improvement in pain following corticosteroid injection. The ML model generated an accuracy of 67.8% (95% confidence interval: 64.6%-70.9%), an F1 score of 30.8%, and an area under the curve of 0.60. CONCLUSION The model demonstrated feasibility to assist clinicians with decision-making in patient selection for corticosteroid injections. Further studies are required to improve the model prior to testing in clinical settings.
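The evaluation metrics named above (sensitivity, specificity, PPV, NPV, accuracy, F-beta) all derive from a binary confusion matrix. A minimal sketch, with made-up counts rather than the study's data:

```python
# Binary-classification metrics from confusion-matrix counts.
# F-beta with beta=2 (the "F2 score") weights recall more than precision.

def binary_metrics(tp, fp, tn, fn, beta=2.0):
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)
    b2 = beta * beta
    fbeta = (1 + b2) * ppv * sens / (b2 * ppv + sens)
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "accuracy": acc, f"f{beta:g}": fbeta}

# illustrative counts only (not the study's confusion matrix)
m = binary_metrics(tp=40, fp=20, tn=240, fn=30)
print(round(m["accuracy"], 3))  # → 0.848
```

With an imbalanced outcome like this one (only about a quarter of patients improved), accuracy alone can look respectable while the F-score stays low, which is why the study reports both.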
Funding: Supported by the Natural Science Foundation of Fujian Province (2022J011177, 2024J01903) and the Key Project of the Fujian Provincial Education Department (JZ230054).
Abstract: In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improving therapeutic effect and safety, and is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through random simulations. Finally, we demonstrate the application of this method by analyzing data from the PA.3 trial, further illustrating its practicality.
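The quantile-regression machinery referenced above rests on the check (pinball) loss, which replaces squared error and makes the fit target a conditional quantile rather than the mean. A minimal sketch with illustrative numbers:

```python
# Check (pinball) loss for quantile regression: under-prediction is penalized
# with weight tau, over-prediction with weight (1 - tau).

def pinball_loss(y_true, y_pred, tau):
    """Average check loss at quantile level tau (0 < tau < 1)."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        u = y - q
        total += tau * u if u >= 0 else (tau - 1) * u
    return total / len(y_true)

# Under-predicting by 2 costs 9x more at tau = 0.9 than at tau = 0.1:
print(pinball_loss([10.0], [8.0], 0.9), pinball_loss([10.0], [8.0], 0.1))
```

Minimizing this loss at several values of tau is what lets the method characterize the whole distribution of the response, not just its center.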
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2021YFB3702404), the National Natural Science Foundation of China (Grant No. 52104370), the Reviving-Liaoning Excellence Plan (XLYC2203186), the Science and Technology Special Projects of Liaoning Province (Grant No. 2022JH25/10200001), the Postdoctoral Research Fund for Northeastern (Grant No. 20210203), Independent Projects of Basic Scientific Research (ZZ2021005), and the CITIC Niobium Steel Development Award Fund (2022-M1824).
Abstract: Selecting proper descriptors (also known as feature selection, FS) is key in the process of establishing a mechanical-property prediction model for hot-rolled microalloyed steels using machine learning (ML) algorithms. Data-driven FS methods can reduce the redundancy of data features and improve the prediction accuracy of mechanical properties. Based on the collected data of hot-rolled microalloyed steels, association rules are used to mine the correlation information in the data. High-quality feature subsets are selected by the proposed FS method (an FS method based on genetic algorithm embedding, GAMIC). Compared with common FS methods, it is shown on the dataset that GAMIC selects feature subsets more appropriately. Six different ML algorithms are trained and tested for mechanical-property prediction. The results show that, based on the extreme gradient boosting (XGBoost) algorithm, the root-mean-square errors of yield strength, tensile strength and elongation are 21.95 MPa, 20.85 MPa and 1.96%, the correlation coefficients (R^(2)) are 0.969, 0.968 and 0.830, and the mean absolute errors are 16.84 MPa, 15.83 MPa and 1.48%, respectively, showing the best prediction performance. Finally, SHapley Additive exPlanations is used to further explore the influence of feature variables on mechanical properties. The proposed GAMIC feature selection method is general-purpose, providing a basis for the development of high-precision mechanical-property prediction models.
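The three metrics reported above (RMSE, MAE, R^(2)) are standard regression diagnostics and can be sketched in a few lines; the yield-strength numbers below are invented for illustration.

```python
# RMSE, MAE and R^2 for a regression model, computed from scratch.

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = (sum(e * e for e in errs) / n) ** 0.5   # root-mean-square error
    mae = sum(abs(e) for e in errs) / n            # mean absolute error
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    return rmse, mae, r2

# toy yield-strength predictions (MPa), not the paper's data
rmse, mae, r2 = regression_metrics([400, 450, 500, 550], [410, 445, 510, 540])
print(round(rmse, 2), round(mae, 2), round(r2, 3))  # → 9.01 8.75 0.974
```

Reporting all three together, as the paper does, guards against the case where a few large errors inflate RMSE while MAE stays small.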
Funding: Supported by the National Natural Science Foundation of China (62161048) and the Sichuan Science and Technology Program (2022NSFSC0547, 2022ZYD0109).
Abstract: In this paper, a feature selection method for determining input parameters in antenna modeling is proposed. In antenna modeling, the input features of the artificial neural network (ANN) are geometric parameters. The selection criteria consider both the correlation and the sensitivity between a geometric parameter and the electromagnetic (EM) response. The maximal information coefficient (MIC), an exploratory data mining tool, is introduced to evaluate both linear and nonlinear correlations. The EM response range is utilized to evaluate sensitivity: a wide response range over varying values of a parameter implies the parameter is highly sensitive, while a narrow response range suggests the parameter is insensitive. Only parameters that are both highly correlated and sensitive are selected as inputs to the ANN, so the sampling space of the model is greatly reduced. The modeling of a wideband circularly polarized antenna is studied as an example to verify the effectiveness of the proposed method. The number of input parameters decreases from 8 to 4. The testing errors of |S_(11)| and axial ratio are reduced by 8.74% and 8.95%, respectively, compared with an ANN with no feature selection.
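The range-based sensitivity screen described above can be sketched directly: a parameter whose sweep spreads the EM response widely is kept, one that barely moves the response is dropped. The parameter names, response values, and threshold below are illustrative assumptions, not the paper's antenna data.

```python
# Sensitivity screening by response range: for each geometric parameter,
# sweep it and measure the spread of the EM response (here |S11| in dB).
# Wide spread -> sensitive parameter -> keep as ANN input.

def sensitive_params(sweeps, min_range):
    """sweeps maps parameter name -> responses observed while sweeping it."""
    return [name for name, resp in sweeps.items()
            if max(resp) - min(resp) >= min_range]

sweeps = {
    "patch_length": [-25.0, -12.0, -6.0],   # wide range: sensitive
    "feed_offset":  [-18.0, -17.5, -17.2],  # narrow range: insensitive
}
print(sensitive_params(sweeps, min_range=3.0))  # → ['patch_length']
```

In the paper this screen is combined with a MIC correlation test (not reproduced here); only parameters passing both filters become ANN inputs.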
Abstract: Feature selection (FS) is a pivotal pre-processing step in developing data-driven models, influencing reliability, performance and optimization. Although existing FS techniques can yield high performance metrics for certain models, they do not invariably guarantee the extraction of the most critical or impactful features. Prior literature underscores the significance of equitable FS practices and has proposed diverse methodologies for the identification of appropriate features. However, the challenge of discerning the most relevant and influential features persists, particularly in the context of the exponential growth and heterogeneity of big data, a challenge that is increasingly salient in modern artificial intelligence (AI) applications. In response, this study introduces an innovative, automated statistical method termed Farea Similarity for Feature Selection (FSFS). The FSFS approach computes a similarity metric for each feature by benchmarking it against the record-wise mean, thereby capturing feature dependencies and mitigating the influence of outliers that could otherwise distort evaluation outcomes. Features are subsequently ranked according to their similarity scores, with the threshold set at the average similarity score. Notably, lower FSFS values indicate higher similarity and stronger data correlations, whereas higher values suggest lower similarity. The FSFS method is designed not only to yield reliable evaluation metrics but also to reduce data complexity without compromising model performance. Comparative analyses were performed against several established techniques, including Chi-squared (CS), Correlation Coefficient (CC), Genetic Algorithm (GA), Exhaustive Approach, Greedy Stepwise Approach, Gain Ratio, and Filtered Subset Eval, using a variety of datasets such as the Experimental Dataset, Breast Cancer Wisconsin (Original), KDD CUP 1999, NSL-KDD, UNSW-NB15, and Edge-IIoT. In the absence of the FSFS method, the highest classifier accuracies observed were 60.00%, 95.13%, 97.02%, 98.17%, 95.86%, and 94.62% for the respective datasets. When the FSFS technique was integrated with data normalization, encoding, balancing, and feature-importance selection processes, accuracies improved to 100.00%, 97.81%, 98.63%, 98.94%, 94.27%, and 98.46%, respectively. The FSFS method, with a computational complexity of O(fn log n), demonstrates robust scalability and is well suited to large datasets, ensuring efficient processing even when the number of features is substantial. By automatically eliminating outliers and redundant data, FSFS reduces computational overhead, resulting in faster training and improved model performance. Overall, the FSFS framework not only optimizes performance but also enhances the interpretability and explainability of data-driven models, thereby facilitating more trustworthy decision-making in AI applications.
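The FSFS pipeline as described (score each feature against the record-wise mean, rank, threshold at the average score) can be sketched as follows. The abstract does not give the exact similarity formula, so mean absolute deviation from the record-wise mean is assumed here purely for illustration.

```python
# FSFS-style selection sketch: benchmark each feature against the record-wise
# mean, rank features by the resulting score, keep those at or below the
# average score (lower score = higher similarity, per the abstract).

def fsfs_select(rows):
    """rows: list of records, each a list of numeric feature values.
    Returns (selected feature indices, per-feature scores)."""
    n_feat = len(rows[0])
    record_means = [sum(r) / n_feat for r in rows]       # record-wise means
    scores = []
    for j in range(n_feat):
        # assumed similarity score: mean |feature value - record mean|
        dev = sum(abs(r[j] - m) for r, m in zip(rows, record_means))
        scores.append(dev / len(rows))
    threshold = sum(scores) / n_feat                     # average score
    return [j for j, s in enumerate(scores) if s <= threshold], scores

rows = [[1.0, 1.1, 9.0],
        [2.0, 2.2, 8.5],
        [3.0, 2.9, 9.5]]
selected, scores = fsfs_select(rows)
print(selected)  # → [0, 1]
```

Feature 2 sits far from every record's mean, gets a high (dissimilar) score, and is dropped; features 0 and 1, which track the record means closely, are kept.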
Funding: Co-supported by the National Natural Science Foundation of China (No. 52477063) and the National Key Research and Development Program of China (No. 2023YFF0719100).
Abstract: With the development of More Electric Aircraft (MEA), the Permanent Magnet Synchronous Motor (PMSM) is widely used in the MEA field. The PMSM control system of an MEA needs to consider system reliability, and the switching frequency of the inverter is one of the influencing factors. At the same time, the control accuracy of the system also needs to be considered, with torque ripple and flux ripple usually regarded as its important indexes. This paper proposes a three-stage series Model Predictive Torque and Flux Control system (three-stage series MPTFC) based on fast optimal voltage vector selection to reduce the switching frequency and suppress torque ripple and flux ripple. Firstly, the analytical model of the PMSM is established and the multi-stage series control method is used to reduce the switching frequency. Secondly, the selectable voltage vectors are extended from 8 to 26 and a fast selection method for optimal voltage vector sectors is designed based on the hysteresis comparator, which can suppress the torque ripple and flux ripple to improve control accuracy. Thirdly, a three-stage series control is obtained by expanding the two-stage series control using the P-Q torque decomposition theory. Finally, a model predictive torque and flux control experimental platform is built, and the feasibility and effectiveness of this method are verified through comparison experiments.
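The hysteresis comparator underpinning the fast sector pre-selection can be sketched generically: torque and flux errors are quantized into increase/hold/decrease commands that narrow the set of candidate voltage vectors. Band widths and error values below are illustrative assumptions, not the paper's controller parameters.

```python
# Three-level hysteresis comparator: quantizes a control error into
# +1 (increase), 0 (hold within the band), or -1 (decrease).

def hysteresis(error, band):
    if error > band:
        return 1
    if error < -band:
        return -1
    return 0

# torque error exceeds its band (demand more torque); flux error is inside
# its band (hold flux), so only vectors matching (+1, 0) need evaluating
print(hysteresis(0.8, band=0.2), hysteresis(0.05, band=0.1))  # → 1 0
```

Pre-classifying errors this way is what makes evaluating 26 candidate vectors tractable: most candidates are ruled out by sector before any predictive cost is computed.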
Funding: Supported by the National Natural Science Foundation of China (12261018) and the Universities Key Laboratory of Mathematical Modeling and Data Mining in Guizhou Province (2023013).
Abstract: In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
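The qualitative behavior described above can be illustrated numerically: between impulses the population follows logistic growth, and at each harvesting instant a fraction is removed. This sketch uses a single age class and illustrative parameter values, not the paper's age-structured model or its stability constants.

```python
# Logistic growth with periodic impulsive harvesting (Euler integration).
# dx/dt = r*x*(1 - x/K) between pulses; x -> (1 - h)*x at each pulse.

def simulate(r=1.0, K=100.0, x0=50.0, harvest_frac=0.3,
             period=1.0, t_end=20.0, dt=0.001):
    x, t, next_pulse = x0, 0.0, period
    while t < t_end:
        x += r * x * (1 - x / K) * dt      # logistic growth step
        t += dt
        if t >= next_pulse:
            x *= (1 - harvest_frac)        # impulsive harvest
            next_pulse += period
    return x

# With these parameters the population settles on a positive periodic orbit
# (it neither dies out nor reaches the carrying capacity K):
x_late = simulate()
print(0 < x_late < 100)  # → True
```

Raising `harvest_frac` toward 1 eventually violates the persistence condition and drives the trajectory toward the trivial (extinction) solution, mirroring the stability dichotomy the paper proves.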
Abstract: Cloud data centres have evolved with an energy-management problem caused by constant increases in size and complexity and by enormous energy consumption. Energy management is a challenging issue that is critical in cloud data centres and an important research concern for many researchers. In this paper, we propose a cuckoo search (CS)-based optimisation technique for virtual machine (VM) selection and a novel placement algorithm considering different constraints. An energy consumption model and a simulation model have been implemented for the efficient selection of VMs. The proposed model, CSOA-VM, not only reduces service level agreement (SLA) violations but also minimises VM migrations. The proposed model also saves energy: the performance analysis shows that the energy consumption obtained is 1.35 kWh, the SLA violation measure is 9.2 and the number of VM migrations is about 268. Thus, there is an improvement in energy consumption of about 1.8% and a 2.1% reduction in SLA violations in comparison with existing techniques.
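A compact, generic cuckoo search sketch is shown below, minimizing a toy objective that stands in for an energy-consumption model. This is the textbook CS scheme (Lévy-flight moves plus random abandonment of the worst nests), not the paper's CSOA-VM formulation; all parameters are illustrative.

```python
# Minimal cuckoo search: Lévy flights generate new candidate solutions,
# each is compared against a random nest, and a fraction pa of the worst
# nests is abandoned and rebuilt every iteration.
import math, random

random.seed(0)

def levy_step(beta=1.5):
    """Mantegna's scheme for a heavy-tailed Levy-flight step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=200, lo=-5.0, hi=5.0):
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i in range(n_nests):
            # new solution via a Levy flight scaled by distance from the best
            cand = [min(hi, max(lo, x + 0.01 * levy_step() * (x - best[j])))
                    for j, x in enumerate(nests[i])]
            k = random.randrange(n_nests)
            if f(cand) < f(nests[k]):
                nests[k] = cand
        nests.sort(key=f)                       # worst nests at the end
        for i in range(int(pa * n_nests)):      # abandon and rebuild them
            nests[-1 - i] = [random.uniform(lo, hi) for _ in range(dim)]
        best = min(nests + [best], key=f)
    return best, f(best)

# toy stand-in for an energy-consumption objective over placement variables
sphere = lambda x: sum(v * v for v in x)
best, val = cuckoo_search(sphere, dim=3)
print(round(val, 4))
```

In a VM-selection setting the objective would instead score a candidate placement by modeled energy use, SLA-violation risk and migration count; the search loop itself is unchanged.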
Abstract: Heart disease prediction is a critical issue in healthcare, where accurate early diagnosis can save lives and reduce healthcare costs. The problem is inherently complex due to the high dimensionality of medical data, irrelevant or redundant features, and the variability in risk factors such as age, lifestyle, and medical history. These challenges often lead to inefficient and less accurate models. Traditional prediction methodologies face limitations in effectively handling large feature sets and optimizing classification performance, which can result in overfitting, poor generalization, and high computational cost. This work proposes a novel classification model for heart disease prediction that addresses these challenges by integrating feature selection through a Genetic Algorithm (GA) with an ensemble deep learning approach optimized using the Tunicate Swarm Algorithm (TSA). The GA selects the most relevant features, reducing dimensionality and improving model efficiency. The selected features are then used to train an ensemble of deep learning models, where the TSA optimizes the weight of each model in the ensemble to enhance prediction accuracy. This hybrid approach addresses key challenges in the field, such as high dimensionality, redundant features, and classification performance, by introducing an efficient feature selection mechanism and optimizing the weighting of the deep learning models in the ensemble. These enhancements result in a model that achieves superior accuracy, generalization, and efficiency compared to traditional methods. The proposed model demonstrated notable advancements in both prediction accuracy and computational efficiency over traditional models. Specifically, it achieved an accuracy of 97.5%, a sensitivity of 97.2%, and a specificity of 97.8%. Additionally, with a 60-40 data split and 5-fold cross-validation, the model showed a significant reduction in training time (90 s), memory consumption (950 MB), and CPU usage (80%), highlighting its effectiveness in processing large, complex medical datasets for heart disease prediction.
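The weighted-ensemble step described above can be sketched independently of the member models: each model outputs a probability, and the ensemble combines them with per-model weights. The weights are fixed here for illustration; in the paper they are tuned by the Tunicate Swarm Algorithm, which is not reproduced in this sketch.

```python
# Weighted soft-voting ensemble: combine member-model probabilities with
# (normalized) per-model weights to produce one ensemble probability.

def weighted_ensemble(probs, weights):
    """probs: per-model predicted probabilities for one sample."""
    s = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / s

# three member models disagree; the higher-weighted models dominate
p = weighted_ensemble([0.9, 0.8, 0.2], weights=[0.5, 0.4, 0.1])
print(round(p, 2))  # → 0.79
```

A swarm optimizer's role is simply to search the weight vector that maximizes validation accuracy of this combination, which is a low-dimensional, cheap-to-evaluate objective once the member models are trained.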
Funding: Supported by the Natural Science Foundation of the Xinjiang Uygur Autonomous Region of China, "Study on constitutive behavior of Al-xSi-yCu high purity aluminum alloy billets for target materials" (2020D01C023).
Abstract: To guarantee the computational accuracy of finite element models, a strain-compensated Arrhenius-type model, a modified Fields-Backofen (m-FB) model and a modified Zerilli-Armstrong (m-ZA) model were established to predict the high-temperature flow stress of as-cast low-alloyed Al-0.5Cu, Al-1Si, and Al-1Si-0.5Cu. To determine the material constants of these three constitutive models, isothermal compression tests of the three aluminum alloys were carried out on a Gleeble-3800 thermal simulator. The predictions of the constitutive models were compared with the experimental results to evaluate their accuracy and to provide a basis for selecting the most suitable constitutive models (parameters) for the three alloys mentioned above. It is found that the strain-compensated Arrhenius model and the m-ZA model can be regarded as the most suitable constitutive models for the Al-0.5Cu and Al-1Si alloys, respectively, and these two constitutive models can also be applied to the Al-1Si-0.5Cu alloy. However, the m-FB model can be applied to the Al-0.5Cu, Al-1Si and Al-1Si-0.5Cu alloys only under high-temperature and medium-strain conditions.
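The strain-compensated Arrhenius-type model referenced above builds on the standard hyperbolic-sine Arrhenius relation via the Zener-Hollomon parameter. A generic sketch follows; the material constants below are illustrative placeholders, not the fitted values for these alloys (strain compensation, which makes the constants strain-dependent, is omitted).

```python
# Hyperbolic-sine Arrhenius flow-stress relation:
#   Z = strain_rate * exp(Q / (R*T))          (Zener-Hollomon parameter)
#   sigma = (1/alpha) * asinh((Z/A)**(1/n))
import math

R = 8.314  # universal gas constant, J/(mol*K)

def flow_stress(strain_rate, T, Q, A, alpha, n):
    """Flow stress (MPa if alpha is in 1/MPa) at strain rate [1/s], T [K]."""
    Z = strain_rate * math.exp(Q / (R * T))
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))

# illustrative constants: Q [J/mol] activation energy, A, alpha [1/MPa], n
sigma = flow_stress(strain_rate=0.1, T=673.0, Q=180e3, A=1e12, alpha=0.02, n=5.0)
print(sigma > 0)  # → True
```

The qualitative behavior matches hot-compression intuition: raising the strain rate or lowering the temperature increases Z and therefore the predicted flow stress.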
Funding: Financed as part of the project "Development of a methodology for instrumental base formation for analysis and modeling of the spatial socio-economic development of systems based on internal reserves in the context of digitalization" (FSEG-2023-0008).
Abstract: This study explores the determinants of impact on ecology in Northern Tanzania. By examining key socio-economic, institutional, and structural factors influencing engagement, the study provides insights into strengthening agribusiness networks and improving livelihoods. Data were collected from 215 farmers and 320 traders through a multistage sampling procedure. The Heckman sample selection model was used in the data analysis. The findings showed that the key factors influencing farmers' decisions on ecology were gender and years of formal education at p<0.1, and access to finance and off-farm income at p<0.05. The degree of farmers' participation in social groups was influenced by age, household size, off-farm income and business network at p<0.05, number of years in formal education and access to finance at p<0.01, and distance to the market at p<0.1. The decision of traders to impact on ecology was significantly influenced by age and trading experience at p<0.1. Meanwhile, the degree of their involvement in social groups was strongly affected by gender, formal education, and trust at p<0.01, as well as by access to finance and business networks at p<0.05. The study concluded that natural ecology is influenced by socio-economic and structural factors, but trust among group members determines the degree of participation. The study recommends that strategies to improve agribusiness networks should address the underlying causes of impact on ecology and strengthen available social groups to improve the performance of farmers and traders.
Funding: Supported by the National Natural Science Foundation of China (No. 51601019), the Guangdong Basic and Applied Basic Research Foundation, China (No. 2022A1515010233), the Key Project of Shaanxi Province of Qinchuangyuan "Scientist and Engineer" Team Construction, China (No. 2023KXJ-123), and the Natural Science Foundation of Shaanxi Province, China (No. 2024JC-YBMS-014).
Abstract: The probability of phase formation was predicted using the k-nearest neighbor (KNN) algorithm and an artificial neural network (ANN) algorithm. Additionally, the composition ranges of Ti, Cu, Ni, and Hf in 40 unknown amorphous alloy composites (AACs) were predicted using the ANN. The predicted alloys were then experimentally verified through X-ray diffraction (XRD) and high-resolution transmission electron microscopy (HRTEM). The prediction accuracies of the ANN for the AM and IM phases are 93.12% and 85.16%, respectively, while those of KNN are 93% and 84%, respectively. It is observed that when the contents of Ti, Cu, Ni, and Hf fall within the ranges of 32.7-34.5 at.%, 16.4-17.3 at.%, 30.9-32.7 at.%, and 17.3-18.3 at.%, respectively, AACs are more likely to form. Based on the XRD and HRTEM results, the Ti_(34)Cu_(17)Ni_(31.36)Hf_(17.64) and Ti_(36)Cu_(18)Ni_(29.44)Hf_(16.56) alloys are identified as good AACs, in close agreement with the predicted amorphous alloy compositions.
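The KNN side of the phase-formation prediction is simple enough to sketch from scratch: classify a new composition by majority vote among its nearest labeled neighbors. The two-feature descriptors and labels below are toy placeholders, not the study's composition dataset.

```python
# Minimal k-nearest-neighbor classifier: Euclidean distance + majority vote.

def knn_predict(train_x, train_y, query, k=3):
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    nearest = sorted(zip(train_x, train_y), key=lambda t: dist(t[0], query))[:k]
    votes = [y for _, y in nearest]
    return max(set(votes), key=votes.count)   # majority vote

# toy descriptors (e.g., two composition-derived features) labeled by phase
train_x = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.9, 0.8), (0.8, 0.9)]
train_y = ["AM", "AM", "AM", "IM", "IM"]
print(knn_predict(train_x, train_y, query=(0.12, 0.18)))  # → AM
```

Because KNN has no trained parameters, its accuracy hinges entirely on the feature representation and the choice of k, which is likely why the study benchmarks it against the ANN.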
Funding: Supported by the Humanities and Social Science Fund of the Ministry of Education of China (No. 22YJCZH028), the National Natural Science Foundation of China (Grant No. 72303001), the Fundamental Research Funds for the Central Universities (No. JUSRP124043), the Anhui Provincial Excellent Young Scientists Fund for Universities (No. 2024AH030001), the Anhui Education Department Excellent Young Teachers Fund (No. YQYB2024021), and the Basic Research Program of Jiangsu (No. BK20251593).
Abstract: Accurately forecasting natural gas prices is of far-reaching significance, not only for the stable operation of the energy market, but also as a key element in promoting sustainable development and addressing environmental challenges. However, natural gas prices are affected by multiple source factors and present complex, unstable nonlinear characteristics that hinder improvements in the prediction accuracy of existing models. To address this issue, this study proposes an innovative multivariate combined forecasting model for natural gas prices. Initially, the study identifies and introduces 16 variables impacting natural gas prices across five crucial dimensions: production, marketing, commodities, political and economic indicators of the United States, and temperature. Subsequently, the study employs the least absolute shrinkage and selection operator, grey relational analysis, and random forests for dimensionality reduction, effectively screening out the most influential key variables to serve as input features for the subsequent learning model. Building upon this foundation, a suite of machine learning models is constructed to ensure precise natural gas price prediction. To further elevate predictive performance, an intelligent algorithm for parameter optimization is incorporated, addressing potential limitations of individual models. To thoroughly assess the prediction accuracy of the proposed model, this study conducts three experiments using monthly natural gas trading prices. These experiments incorporate 19 benchmark models for comparative analysis, utilizing five evaluation metrics to quantify forecasting effectiveness. Furthermore, this study validates the proposed model's effectiveness in depth through hypothesis testing, discussion of the improvement ratio in forecasting performance, and case studies on other energy prices. The empirical results demonstrate that the multivariate combined forecasting method developed in this study surpasses the other comparative models in forecasting accuracy. It offers new perspectives and methodologies for natural gas price forecasting while also providing valuable insights for other energy price forecasting studies.
Abstract: Model evaluation using benchmark datasets is an important method for measuring the capability of large language models (LLMs) in specific domains, mainly assessing the knowledge and reasoning abilities of LLMs. Therefore, to better assess the capability of LLMs in the agricultural domain, Agri-Eval was proposed as a benchmark for assessing the knowledge and reasoning ability of LLMs in agriculture. The assessment dataset used in Agri-Eval covers seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contains a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best, with an accuracy rate of 75.49%. Among international general-purpose LLMs, Gemini 2.0 Pro Exp 0205 stood out as the top performer, achieving an accuracy rate of 74.28%. As a vertical-domain agricultural LLM, Shennong V2.0 outperformed all the LLMs in China, and its answer accuracy on agricultural knowledge exceeded that of all the existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate model capability in the field of agriculture through a variety of tasks and tests, promoting the development of LLMs in the field of agriculture.
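Benchmarks of this kind are typically scored by comparing model answers against an answer key, overall and per discipline. A sketch of that scoring loop follows; the questions, disciplines, and answers are invented placeholders, not Agri-Eval items.

```python
# Score a multiple-choice benchmark: overall accuracy plus a per-discipline
# breakdown, as a benchmark like the one described above would report.

def score(items, answers):
    """items: list of (discipline, correct_choice); answers: model choices."""
    per = {}
    for (disc, gold), pred in zip(items, answers):
        hit, tot = per.get(disc, (0, 0))
        per[disc] = (hit + (pred == gold), tot + 1)
    overall = sum(h for h, _ in per.values()) / sum(t for _, t in per.values())
    return overall, {d: h / t for d, (h, t) in per.items()}

items = [("crop science", "A"), ("crop science", "C"), ("horticulture", "B")]
overall, by_disc = score(items, answers=["A", "B", "B"])
print(round(overall, 2))  # → 0.67
```

The per-discipline breakdown is what makes a multi-field benchmark informative: a model can post a strong overall accuracy while lagging badly in one discipline.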
Funding: The authors acknowledge the World Climate Research Programme (WCRP), Climate Variability and Predictability (CLIVAR), and Global Energy and Water Exchanges (GEWEX) for facilitating the coordination of African monsoon research; support from the Center for Earth System Modeling, Analysis, and Data at the Pennsylvania State University; and support from the Office of Science of the U.S. Department of Energy, Biological and Environmental Research, as part of the Regional & Global Model Analysis (RGMA) program area.
Abstract: In recent years, there has been an increasing need for climate information across diverse sectors of society. This demand has arisen from the necessity to adapt to and mitigate the impacts of climate variability and change. Likewise, this period has seen a significant increase in our understanding of the physical processes and mechanisms that drive precipitation and its variability across different regions of Africa. By leveraging a large volume of climate model outputs, numerous studies have investigated the model representation of African precipitation as well as the underlying physical processes. These studies have assessed whether the physical processes are well depicted and whether the models are fit for informing mitigation and adaptation strategies. This paper provides a review of the progress in precipitation simulation over Africa in state-of-the-science climate models and discusses the major issues and challenges that remain.
Funding: Supported by Ho Chi Minh City Open University, Vietnam, under grant number E2024.02.1CD, and by Suan Sunandha Rajabhat University, Thailand.
Abstract: The Financial Technology (FinTech) sector has witnessed rapid growth, resulting in increasingly complex and high-volume digital transactions. Although this expansion improves efficiency and accessibility, it also introduces significant vulnerabilities, including fraud, money laundering, and market manipulation. Traditional anomaly detection techniques often fail to capture the relational and dynamic characteristics of financial data. Graph Neural Networks (GNNs), capable of modeling intricate interdependencies among entities, have emerged as a powerful framework for detecting subtle and sophisticated anomalies. However, the high dimensionality and inherent noise of FinTech datasets demand robust feature selection strategies to improve model scalability, performance, and interpretability. This paper presents a comprehensive survey of GNN-based approaches for anomaly detection in FinTech, with an emphasis on the synergistic role of feature selection. We examine the theoretical foundations of GNNs, review state-of-the-art feature selection techniques, analyze their integration with GNNs, and categorize prevalent anomaly types in FinTech applications. In addition, we discuss practical implementation challenges, highlight representative case studies, and propose future research directions to advance the field of graph-based anomaly detection in financial systems.