Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 71973116 and 12201018, the Postdoctoral Project in China under Grant No. 2022M720336, the National Natural Science Foundation of China under Grant Nos. 12071457 and 11971045, the Beijing Natural Science Foundation under Grant No. 1222002, and the NQI Project under Grant No. 2022YFF0609903.
Abstract: In recent years, the Kriging model has gained wide popularity in various fields such as space geology, econometrics, and computer experiments. As a result, research on this model has proliferated. In this paper, the authors propose a model averaging estimation based on the best linear unbiased prediction of the Kriging model and the leave-one-out cross-validation method, taking model uncertainty into account. The authors present a weight selection criterion for the model averaging estimation and provide two theoretical justifications for the proposed method. First, the estimated weight based on the proposed criterion is asymptotically optimal in achieving the lowest possible prediction risk. Second, the proposed method asymptotically assigns all weights to the correctly specified models when the candidate model set includes these models. The effectiveness of the proposed method is verified through numerical analyses.
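As a hedged sketch of this cross-validation-based weighting idea (not the authors' exact criterion): leave-one-out predictions are computed for several candidate Kriging (Gaussian-process) models, and non-negative weights summing to one are chosen to minimize the cross-validated squared error. The kernels, data, and optimizer below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 2))
y = np.sin(6 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=40)

# Candidate Kriging (GP) models with different covariance kernels
candidates = [
    GaussianProcessRegressor(kernel=RBF(), alpha=1e-2, normalize_y=True),
    GaussianProcessRegressor(kernel=Matern(nu=1.5), alpha=1e-2, normalize_y=True),
    GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2, normalize_y=True),
]

# Leave-one-out predictions for each candidate model (one column per model)
P = np.column_stack([
    cross_val_predict(gp, X, y, cv=LeaveOneOut()) for gp in candidates
])

def cv_criterion(w):
    """Cross-validated squared error of the weighted prediction."""
    return np.sum((y - P @ w) ** 2)

# Weights restricted to the simplex (non-negative, summing to one)
k = len(candidates)
res = minimize(
    cv_criterion, x0=np.full(k, 1.0 / k),
    bounds=[(0.0, 1.0)] * k,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
print("estimated weights:", np.round(res.x, 3))
```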
Abstract: In deriving a regression model, analysts often have to use variable selection, despite the problems introduced by data-dependent model building. Resampling approaches are proposed to handle some of the critical issues. In order to assess and compare several strategies, we will conduct a simulation study with 15 predictors and a complex correlation structure in the linear regression model. Using sample sizes of 100 and 400 and estimates of the residual variance corresponding to R2 of 0.50 and 0.71, we consider 4 scenarios with varying amounts of information. We also consider two examples with 24 and 13 predictors, respectively. We will discuss the value of cross-validation, shrinkage and backward elimination (BE) with varying significance levels. We will assess whether 2-step approaches using global or parameterwise shrinkage (PWSF) can improve selected models and will compare results to models derived with the LASSO procedure. Besides the MSE, we will use model sparsity and further criteria for model assessment. The amount of information in the data has an influence on the selected models and the comparison of the procedures. None of the approaches was best in all scenarios. The performance of backward elimination with a suitably chosen significance level was not worse than that of the LASSO, and the models selected by BE were much sparser, an important advantage for interpretation and transportability. Compared to global shrinkage, PWSF had better performance. Provided that the amount of information is not too small, we conclude that BE followed by PWSF is a suitable approach when variable selection is a key part of data analysis.
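For illustration, a minimal sketch of p-value-based backward elimination with statsmodels is given below; the data, the significance level, and the helper function are hypothetical, and the post-selection parameterwise shrinkage step is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_elimination(X, y, alpha=0.05):
    """Drop the least significant predictor until all remaining p-values are below alpha."""
    selected = list(X.columns)
    while selected:
        fit = sm.OLS(y, sm.add_constant(X[selected])).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            break
        selected.remove(worst)
    return selected

# Illustrative data: 15 correlated predictors, only a few truly active
rng = np.random.default_rng(0)
n, p = 400, 15
Z = rng.normal(size=(n, p)) + 0.5 * rng.normal(size=(n, 1))   # induces correlation
X = pd.DataFrame(Z, columns=[f"x{j+1}" for j in range(p)])
y = pd.Series(2 * X["x1"] - 1.5 * X["x3"] + X["x7"] + rng.normal(size=n))

print(backward_elimination(X, y, alpha=0.05))
```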
Funding: Supported by the National Natural Science Foundation of China Civil Aviation Joint Fund (U1833110) and Research on the Dual Prevention Mechanism and Intelligent Management Technology for Civil Aviation Safety Risks (YK23-03-05).
Abstract: Aviation accidents are currently one of the leading causes of significant injuries and deaths worldwide. This entices researchers to investigate aircraft safety using data analysis approaches based on an advanced machine learning algorithm. To assess aviation safety and identify the causes of incidents, a classification model with a light gradient boosting machine (LGBM) based on the aviation safety reporting system (ASRS) has been developed. It is improved by k-fold cross-validation with a hybrid sampling model (HSCV), which may boost classification performance and maintain data balance. The results show that employing the LGBM-HSCV model can significantly improve accuracy while alleviating data imbalance. Vertical comparison with other cross-validation (CV) methods and lateral comparison with different numbers of folds comprise the comparative approach. Aside from this comparison, two further CV approaches based on the improved method in this study are discussed: one with a different sampling and folding order, and the other with additional CV. According to the assessment indices obtained with different methods, the LGBM-HSCV model proposed here is effective at detecting incident causes. The improved model proposed for imbalanced data categorization may serve as a point of reference for similar data processing, and the model's accurate identification of civil aviation incident causes can help to improve civil aviation safety.
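A minimal sketch of the underlying idea, assuming a generic hybrid sampling scheme (SMOTE plus random undersampling) rather than the authors' exact HSCV procedure: resampling is placed inside an imblearn pipeline so that it is applied only to the training portion of each fold, and LightGBM is scored fold by fold on synthetic imbalanced data standing in for the ASRS features.

```python
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic imbalanced stand-in for the ASRS-derived features
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.9, 0.1], random_state=0)

# Hybrid sampling (oversample minority, then undersample majority) inside each fold
model = Pipeline(steps=[
    ("over", SMOTE(sampling_strategy=0.5, random_state=0)),
    ("under", RandomUnderSampler(sampling_strategy=0.8, random_state=0)),
    ("clf", LGBMClassifier(n_estimators=200, random_state=0)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
print("fold F1 scores:", scores.round(3))
```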
Abstract: For the nonparametric regression model $Y_{ni} = g(x_{ni}) + \epsilon_{ni}$, $i = 1, \ldots, n$, with regularly spaced nonrandom design, the authors study the behavior of the nonlinear wavelet estimator of g(x). When the threshold and truncation parameters are chosen by cross-validation on the average squared error, strong consistency for the case of dyadic sample size and moment consistency for arbitrary sample size are established under some regularity conditions.
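In standard notation (the exact criterion and the handling of the truncation parameter may differ in the paper), cross-validation on the average squared error selects the threshold as

$$\mathrm{CV}(\lambda)=\frac{1}{n}\sum_{i=1}^{n}\bigl(Y_{ni}-\hat g_{\lambda}^{(-i)}(x_{ni})\bigr)^{2},\qquad \hat\lambda=\arg\min_{\lambda}\mathrm{CV}(\lambda),$$

where $\hat g_{\lambda}^{(-i)}$ denotes the wavelet estimator computed without the i-th observation.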
Funding: Supported in part by National Sciences Foundation of China grant (11672001) and Jiangsu Province Science and Technology Agency grant (BE2016785), and in part by Postgraduate Research & Practice Innovation Program of Jiangsu Province grant (KYCX18_0156).
Abstract: Background Cardiovascular diseases are closely linked to atherosclerotic plaque development and rupture. Plaque progression prediction is of fundamental significance to cardiovascular research and disease diagnosis, prevention, and treatment. Generalized linear mixed models (GLMM) are an extension of linear models for categorical responses that account for the correlation among observations. Methods Magnetic resonance image (MRI) data of carotid atherosclerotic plaques were acquired from 20 patients with consent obtained, and 3D thin-layer models were constructed to calculate plaque stress and strain for plaque progression prediction. Data for ten morphological and biomechanical risk factors, including wall thickness (WT), lipid percent (LP), minimum cap thickness (MinCT), plaque area (PA), plaque burden (PB), lumen area (LA), maximum plaque wall stress (MPWS), maximum plaque wall strain (MPWSn), average plaque wall stress (APWS), and average plaque wall strain (APWSn), were extracted from all slices for analysis. Wall thickness increase (WTI), plaque burden increase (PBI) and plaque area increase (PAI) were chosen as three measures of plaque progression. Generalized linear mixed models (GLMM) with a 5-fold cross-validation strategy were used to calculate the prediction accuracy of each predictor and identify the optimal predictor with the highest prediction accuracy, defined as the sum of sensitivity and specificity. All 201 MRI slices were randomly divided into 4 training subgroups and 1 verification subgroup. The training subgroups were used for model fitting, and the verification subgroup was used to evaluate the model. All combinations (1023 in total) of the 10 risk factors were fed to the GLMM, and the prediction accuracy of each predictor was selected from the point on the ROC (receiver operating characteristic) curve with the highest sum of specificity and sensitivity. Results LA was the best single predictor for PBI with the highest prediction accuracy (1.3601) and an area under the ROC curve (AUC) of 0.6540, followed by APWSn (1.3363) with AUC = 0.6342. The optimal predictor among all possible combinations for PBI was the combination of LA, PA, LP, WT, MPWS and MPWSn with prediction accuracy = 1.4146 (AUC = 0.7158). LA was once again the best single predictor for PAI with the highest prediction accuracy (1.1846) with AUC = 0.6064, followed by MPWSn (1.1832) with AUC = 0.6084. The combination of PA, PB, WT, MPWS, MPWSn and APWSn gave the best prediction accuracy (1.3025) for PAI, and the AUC value is 0.6657. PA was the best single predictor for WTI with the highest prediction accuracy (1.2887) with AUC = 0.6415, followed by WT (1.2540) with AUC = 0.6097. The combination of PA, PB, WT, LP, MinCT, MPWS and MPWSn was the best predictor for WTI with a prediction accuracy of 1.3140, with AUC = 0.6552. This indicated that PBI was a more predictable measure than WTI and PAI. The combinational predictors improved prediction accuracy by 9.95%, 4.01% and 1.96% over the best single predictors for PAI, PBI and WTI (AUC values improved by 9.78%, 9.45%, and 2.14%), respectively. Conclusions The use of GLMM with a 5-fold cross-validation strategy combining both morphological and biomechanical risk factors could potentially improve the accuracy of carotid plaque progression prediction. This study suggests that a linear combination of multiple predictors can provide potential improvement to existing plaque assessment schemes.
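A simplified sketch of the evaluation step is shown below: logistic regression on synthetic data stands in for the full GLMM (random effects are omitted), and "prediction accuracy" is computed, as in the study, as the maximum of sensitivity plus specificity along the ROC curve within a 5-fold split.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for a candidate subset of the 10 risk factors over 201 slices
X, y = make_classification(n_samples=201, n_features=6, random_state=0)

def prediction_accuracy(y_true, y_score):
    """Maximum of sensitivity + specificity over all ROC thresholds."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return float(np.max(tpr + (1 - fpr)))

accs = []
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    accs.append(prediction_accuracy(y[test], clf.predict_proba(X[test])[:, 1]))

print("per-fold prediction accuracy (sensitivity + specificity):", np.round(accs, 3))
```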
Funding: Funded by the Ongoing Research Funding Program, Project Number (ORF-2025-648), King Saud University, Riyadh, Saudi Arabia.
Abstract: Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the class imbalance inherent in the personal key indicators of the heart disease dataset, the localized random affine shadowsampling technique is employed, which enhances minority class representation while minimizing overfitting. At the core of the framework lies the Deep Residual Network (DeepResNet), which employs hierarchical residual transformations to facilitate efficient feature extraction and capture complex, non-linear relationships in the data. Experimental results demonstrate that the proposed model significantly outperforms existing techniques, achieving improvements of 3.26% in accuracy, 3.16% in area under the receiver operating characteristic curve, 1.09% in recall, and 1.07% in F1-score. Furthermore, robustness is validated using 10-fold cross-validation, confirming the model's generalizability across diverse data distributions. Moreover, model interpretability is ensured through the integration of Shapley additive explanations and local interpretable model-agnostic explanations, offering valuable insights into the contribution of individual features to model predictions. Overall, the proposed DL framework presents a robust, interpretable, and clinically applicable solution for heart disease prediction.
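A hedged sketch of the 10-fold evaluation loop only (the DeepResNet architecture, the resampling step, and the SHAP/LIME explanations are not reproduced; a generic classifier and synthetic data serve as placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic imbalanced stand-in for the heart disease indicators
X, y = make_classification(n_samples=2000, n_features=17, weights=[0.85, 0.15], random_state=0)

scores = cross_validate(
    RandomForestClassifier(random_state=0),            # placeholder for the DeepResNet classifier
    X, y,
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    scoring=("accuracy", "roc_auc", "recall", "f1"),
)
for name in ("test_accuracy", "test_roc_auc", "test_recall", "test_f1"):
    print(name, round(scores[name].mean(), 3), "+/-", round(scores[name].std(), 3))
```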
Funding: Supported by the US Department of Agriculture, Agriculture and Food Research Initiative, National Institute of Food and Agriculture Competitive Grant No. 2015-67015-22947.
Abstract: Background: A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for Best Linear Unbiased Prediction using whole-genome data. Leave-one-out cross-validation can be used to quantify the predictive ability of a statistical model. Methods: Naive application of leave-one-out cross-validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross-validation strategies are presented here, requiring little more effort than a single analysis. Results: The efficient leave-one-out cross-validation strategy is 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations. Conclusions: Efficient leave-one-out cross-validation strategies are presented here, requiring little more effort than a single analysis.
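For ridge-type (BLUP-like) linear systems with a fixed penalty, this kind of speed-up rests on the classical leave-one-out identity e_(-i) = e_i / (1 - h_ii), where h_ii is a diagonal element of the hat matrix, so all n leave-one-out residuals follow from a single fit. A minimal numerical check of that identity is sketched below; the data and penalty are illustrative, not the authors' genomic model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 50, 1.0
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Single penalized fit: hat matrix H = X (X'X + lam*I)^{-1} X'
A = X.T @ X + lam * np.eye(p)
H = X @ np.linalg.solve(A, X.T)
resid = y - H @ y
loo_resid_fast = resid / (1.0 - np.diag(H))   # all n LOO residuals at once

# Naive check: refit the model without observation i, n times
loo_resid_naive = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    Ai = X[keep].T @ X[keep] + lam * np.eye(p)
    bi = np.linalg.solve(Ai, X[keep].T @ y[keep])
    loo_resid_naive[i] = y[i] - X[i] @ bi

print(np.allclose(loo_resid_fast, loo_resid_naive))  # the identity is exact: True
```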
Abstract: Sustainable forecasting of home energy demand (SFHED) is crucial for promoting energy efficiency, minimizing environmental impact, and optimizing resource allocation. Machine learning (ML) supports SFHED by identifying patterns and forecasting demand. However, conventional hyperparameter tuning methods often rely solely on minimizing average prediction errors, typically through fixed k-fold cross-validation, which overlooks error variability and limits model robustness. To address this limitation, we propose the Optimized Robust Hyperparameter Tuning for Machine Learning with Enhanced Multi-fold Cross-Validation (ORHT-ML-EMCV) framework. This method integrates statistical analysis of k-fold validation errors by incorporating their mean and variance into the optimization objective, enhancing robustness and generalizability. A weighting factor is introduced to balance accuracy and robustness, and its impact is evaluated across a range of values. A novel Enhanced Multi-Fold Cross-Validation (EMCV) technique is employed to automatically evaluate model performance across varying fold configurations without requiring a predefined k value, thereby reducing sensitivity to data splits. Using three evolutionary algorithms, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE), we optimize two ensemble models: XGBoost and LightGBM. The optimization process minimizes both the mean error and its variance, with robustness assessed through cumulative distribution function (CDF) analyses. Experiments on three real-world residential datasets show the proposed method reduces the worst-case Root Mean Square Error (RMSE) by up to 19.8% and narrows confidence intervals by up to 25%. Cross-household validations confirm strong generalization, achieving coefficients of determination (R²) of 0.946 and 0.972 on unseen homes. The framework offers a statistically grounded and efficient solution for robust energy forecasting.
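The robust objective can be sketched as follows; a simple random search stands in for the GA/PSO/DE optimizers and the EMCV fold scheme, and the weighting factor, parameter ranges, and data are illustrative assumptions.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=1500, n_features=12, noise=5.0, random_state=0)
rng = np.random.default_rng(0)
w = 0.5   # weighting factor balancing accuracy (mean error) and robustness (error spread)

def robust_objective(params):
    """Mean RMSE across folds plus a weighted penalty on its standard deviation."""
    model = LGBMRegressor(**params, random_state=0)
    rmse = -cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0),
                            scoring="neg_root_mean_squared_error")
    return rmse.mean() + w * rmse.std()

best, best_score = None, np.inf
for _ in range(20):   # simplified random search in place of GA / PSO / DE
    params = {"n_estimators": int(rng.integers(100, 600)),
              "learning_rate": float(rng.uniform(0.01, 0.2)),
              "num_leaves": int(rng.integers(15, 63))}
    score = robust_objective(params)
    if score < best_score:
        best, best_score = params, score

print("best params:", best, "robust score:", round(best_score, 3))
```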
Funding: This work is supported by Project Nos. NSTC-112-2221-E-324-003 MY3, NSTC-111-2622-E-324-002, and NSTC-112-2221-E-324-011-MY2.
Abstract: Unlike the detection of marked on-street parking spaces, detecting unmarked spaces poses significant challenges due to the absence of clear physical demarcation and the uneven gaps caused by irregular parking. In urban cities with heavy traffic flow, these challenges can result in traffic disruptions, rear-end collisions, sideswipes, and congestion as drivers struggle to make decisions. We propose a real-time detection system for on-street parking spaces using YOLO models and recommend the most suitable space based on a KD-tree search. Lightweight versions of YOLOv5, YOLOv7-tiny, and YOLOv8 with different architectures are trained. Among the models, YOLOv5s with SPPF at the backbone achieved an F1-score of 0.89 and was selected for validation using k-fold cross-validation on our dataset. The low variance and standard deviation recorded across folds indicate the model's generalizability, reliability, and stability. Inference with the KD-tree using predictions from the YOLO models recorded FPS of 37.9 for YOLOv5, 67.2 for YOLOv7-tiny, and 67.0 for YOLOv8. The models successfully detect both marked and unmarked empty parking spaces on test data with varying inference speeds and FPS. These models can be efficiently deployed for real-time applications due to their high FPS, inference speed, and lightweight nature. In comparison with other state-of-the-art models, our models outperform them, further demonstrating their effectiveness.
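A minimal sketch of the recommendation step, with hypothetical coordinates: a KD-tree is built from the centres of the empty spaces returned by the detector, and the space nearest to the vehicle is queried.

```python
import numpy as np
from scipy.spatial import cKDTree

# Centres (x, y) of empty parking spaces detected by the YOLO model (illustrative values)
empty_spaces = np.array([[12.0, 3.5], [18.2, 3.6], [25.4, 3.4], [31.0, 3.7]])
vehicle_position = np.array([20.0, 5.0])

tree = cKDTree(empty_spaces)                    # rebuilt from detections each frame
distance, index = tree.query(vehicle_position)  # nearest-neighbour search
print(f"recommend space {index} at {empty_spaces[index]}, {distance:.1f} m away")
```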
Funding: Financially supported by the National Natural Science Foundation of China (Grant No. 11472076).
Abstract: Jacket platforms constitute the foundational infrastructure of offshore oil and gas field exploitation. How to efficiently and accurately monitor the mechanical properties of jacket structures is one of the key problems to be solved to ensure the safe operation of the platform. To address the practical engineering problem that it is difficult to monitor the stress response of the tubular joints of jacket platforms online, a digital twin reduced-order method for real-time prediction of the stress response of tubular joints is proposed. In the offline construction phase, multi-scale modeling and multi-parameter experimental design methods are used to obtain the stress response data set of the jacket structure. Proper orthogonal decomposition is employed to extract the main feature information from the snapshot matrix, resulting in a reduced-order basis. The leave-one-out cross-validation method is used to select the optimal modal order for constructing the reduced-order model (ROM). In the online prediction phase, a digital twin model of the tubular joint is established, and the prediction performance of the ROM is analyzed and verified using random environmental loads and field environmental monitoring data. The results indicate that, compared with traditional numerical simulations of tubular joints, the ROM based on the proposed reduced-order method is more efficient at predicting the stress response of tubular joints while ensuring accuracy and robustness.
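One common way to pick the modal order by leave-one-out cross-validation is sketched below with a synthetic snapshot matrix: for each candidate order, every snapshot is left out in turn, a POD basis is built from the remaining snapshots via the singular value decomposition, and the projection error of the left-out snapshot is averaged. The details of the paper's ROM (reduced-coordinate interpolation, load parameters) are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic snapshot matrix S (n_dof x n_snapshots) standing in for the stress response data set
S = rng.normal(size=(500, 30)) @ rng.normal(size=(30, 20))

def loo_projection_error(S, r):
    """Mean relative error when each left-out snapshot is projected onto a rank-r POD basis built from the others."""
    n = S.shape[1]
    errs = []
    for i in range(n):
        keep = np.delete(np.arange(n), i)
        U, _, _ = np.linalg.svd(S[:, keep], full_matrices=False)
        Ur = U[:, :r]                      # reduced-order basis of order r
        s = S[:, i]
        errs.append(np.linalg.norm(s - Ur @ (Ur.T @ s)) / np.linalg.norm(s))
    return float(np.mean(errs))

for r in (2, 5, 10, 15):
    print("order", r, "LOO projection error", round(loo_projection_error(S, r), 4))
```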
Funding: The National Key Research and Development Program of China under contract No. 2023YFC3008204 and the National Natural Science Foundation of China under contract Nos. 41977302 and 42476217.
Abstract: Spartina alterniflora is now listed among the world's 100 most dangerous invasive species, severely affecting the ecological balance of coastal wetlands. Remote sensing technologies based on deep learning enable large-scale monitoring of Spartina alterniflora, but they require large datasets and have poor interpretability. A new method is proposed to detect Spartina alterniflora from Sentinel-2 imagery. Firstly, to capture the high canopy cover and dense community characteristics of Spartina alterniflora, multi-dimensional shallow features are extracted from the imagery. Secondly, to detect different objects from satellite imagery, index features are extracted, and the statistical features of the Gray-Level Co-occurrence Matrix (GLCM) are derived using principal component analysis. Then, ensemble learning methods, including random forest, extreme gradient boosting, and light gradient boosting machine models, are employed for image classification. Meanwhile, Recursive Feature Elimination with Cross-Validation (RFECV) is used to select the best feature subset. Finally, to enhance the interpretability of the models, the best features are utilized to classify multi-temporal images, and SHapley Additive exPlanations (SHAP) is combined with these classifications to explain the model prediction process. The method is validated using Sentinel-2 imagery and previous observations of Spartina alterniflora on Chongming Island; it is found that the model combining image texture features such as GLCM variance can significantly improve the detection accuracy of Spartina alterniflora by about 8% compared with the model without image texture features. Through multiple model comparisons and feature selection via RFECV, the selected model and eight features demonstrated good classification accuracy when applied to data from different time periods, proving that feature reduction can effectively enhance model generalization. Additionally, visualizing model decisions using SHAP revealed that the image texture feature component_1_GLCMVariance is particularly important for identifying each land cover type.
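A hedged sketch of the feature-selection step, with a synthetic matrix standing in for the spectral, index, and GLCM texture features: RFECV wraps a random forest and reports the optimal feature subset under 5-fold cross-validation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# Stand-in for the band, index, and GLCM texture features extracted from Sentinel-2
X, y = make_classification(n_samples=1000, n_features=25, n_informative=8, random_state=0)

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    step=1,
    cv=StratifiedKFold(5, shuffle=True, random_state=0),
    scoring="accuracy",
)
selector.fit(X, y)
print("optimal number of features:", selector.n_features_)
print("selected feature mask:", selector.support_)
```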
Funding: Supported by the State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology (No. 2013DX10), the Sino-Dutch Research Program (No. zhmhgfs2011-001), and the Sino-American Coal Chemical Industry Program (No. ZMAGZ 2011001).
Abstract: Prediction of the biodegradability of organic pollutants is an ecologically desirable and economically feasible tool for estimating the environmental fate of chemicals. In this paper, a stepwise multiple linear regression analysis method was applied to establish a quantitative structure-biodegradability relationship (QSBR) between the chemical structure and a novel biodegradation activity index (qmax) of 20 polycyclic aromatic hydrocarbons (PAHs). The frequency B3LYP/6-311+G(2df,p) calculations showed no imaginary values, implying that all the structures are minima on the potential energy surface. After eliminating the parameters that had a low correlation coefficient with qmax, the major descriptors influencing the biodegradation activity were screened to be Freq, D, MR, EHOMO and To IE. The evaluation of the developed QSBR model, using a leave-one-out cross-validation procedure, showed that the relationships are significant and that the model has good robustness and predictive ability. The results would be helpful for understanding the mechanisms governing biodegradation at the molecular level.
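As a small illustration of the leave-one-out validation commonly used to judge such QSBR/QSAR models, the sketch below computes a cross-validated q² for a multiple linear regression; the descriptor values are synthetic, not the reported ones.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Synthetic stand-in for the 20 PAHs x 5 descriptors (Freq, D, MR, EHOMO, To IE)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
qmax = X @ np.array([0.8, -0.4, 0.3, 0.6, -0.2]) + 0.1 * rng.normal(size=20)

# Leave-one-out predictions and the cross-validated q2 = 1 - PRESS / SS_total
pred_loo = cross_val_predict(LinearRegression(), X, qmax, cv=LeaveOneOut())
q2 = 1 - np.sum((qmax - pred_loo) ** 2) / np.sum((qmax - qmax.mean()) ** 2)
print("leave-one-out q2:", round(q2, 3))
```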