Aviation accidents are currently one of the leading causes of significant injuries and deaths worldwide. This entices researchers to investigate aircraft safety using data analysis approaches based on advanced machine learning algorithms. To assess aviation safety and identify the causes of incidents, a classification model based on the light gradient boosting machine (LGBM) and the aviation safety reporting system (ASRS) has been developed. It is improved by k-fold cross-validation with a hybrid sampling model (HSCV), which can boost classification performance and maintain data balance. The results show that employing the LGBM-HSCV model can significantly improve accuracy while alleviating data imbalance. The comparative approach comprises a vertical comparison with other cross-validation (CV) methods and a lateral comparison with different fold counts. Beyond these comparisons, two further CV approaches based on the improved method are discussed: one with a different sampling and folding order, and the other with additional CV. According to the assessment indices across methods, the LGBM-HSCV model proposed here is effective at detecting incident causes. The improved model for imbalanced data categorization may serve as a point of reference for similar data processing, and the model's accurate identification of civil aviation incident causes can help improve civil aviation safety.
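The hybrid-sampling idea behind HSCV can be sketched in a few lines. This is a hedged illustration only: the abstract does not specify the exact sampler, so random undersampling of the majority class plus random oversampling of the minority class toward a midpoint target stands in for it, the data are synthetic, and the LGBM classifier itself is omitted.

```python
import random
from collections import Counter

def hybrid_sample(X, y, seed=0):
    """Balance two classes by undersampling the majority and oversampling
    the minority toward the midpoint of the two class sizes."""
    rng = random.Random(seed)
    (maj, n_maj), (mino, n_min) = Counter(y).most_common()
    target = (n_maj + n_min) // 2
    idx_maj = [i for i, c in enumerate(y) if c == maj]
    idx_min = [i for i, c in enumerate(y) if c == mino]
    keep = rng.sample(idx_maj, target)                             # undersample
    extra = [rng.choice(idx_min) for _ in range(target - n_min)]   # oversample
    idx = keep + idx_min + extra
    rng.shuffle(idx)
    return [X[i] for i in idx], [y[i] for i in idx]

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint folds for cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[f::k] for f in range(k)]

# toy imbalanced data: 90 negatives, 10 positives
X = [[float(i)] for i in range(100)]
y = [0] * 90 + [1] * 10
Xb, yb = hybrid_sample(X, y)     # both classes end up at the midpoint size, 50
folds = k_fold_indices(len(yb), 5)
```

In the full HSCV scheme each fold would then train an LGBM model on the resampled training folds and evaluate on the held-out fold.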
The existing blockwise empirical likelihood (BEL) method blocks the observations or their analogues, which has proven useful under some dependent-data settings. In this paper, we introduce a new BEL (NBEL) method that blocks the scoring functions under high-dimensional cases. We study the construction of confidence regions for the parameters in spatial autoregressive models with spatial autoregressive disturbances (SARAR models) with a high dimension of parameters using the NBEL method. It is shown that the NBEL ratio statistics are asymptotically χ²-type distributed, which is used to obtain NBEL-based confidence regions for the parameters in SARAR models. A simulation study is conducted to compare the performances of the NBEL and the usual EL methods.
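The blocking idea can be made concrete with a minimal scalar sketch: the scoring function g(x) = x − μ₀ is averaged over non-overlapping blocks, and the usual empirical-likelihood Lagrange-multiplier equation is then solved on the block scores by Newton's method. The data, block length, and μ₀ below are hypothetical; the paper's NBEL handles high-dimensional SARAR scores, not a scalar mean.

```python
import math

def block_scores(x, mu0, block_len):
    """Average the scoring function g(x) = x - mu0 over non-overlapping blocks."""
    s = [xi - mu0 for xi in x]
    return [sum(s[i:i + block_len]) / block_len
            for i in range(0, len(s) - block_len + 1, block_len)]

def el_ratio_statistic(scores, iters=50):
    """-2 log EL ratio for E[g] = 0: Newton-solve sum s/(1 + lam*s) = 0 for lam."""
    lam = 0.0
    for _ in range(iters):
        f = sum(s / (1 + lam * s) for s in scores)
        fp = -sum(s * s / (1 + lam * s) ** 2 for s in scores)
        lam -= f / fp
    return 2 * sum(math.log(1 + lam * s) for s in scores), lam

x = [0.8, 0.9, 1.2, 1.1] * 5                   # 20 observations, mean 1.0
stat, lam = el_ratio_statistic(block_scores(x, 0.95, 2))
# stat is compared to a chi-square(1) quantile to test mu = 0.95
```

The asymptotic χ²-type calibration in the paper is exactly what licenses comparing `stat` to a chi-square quantile.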
Over the past few decades, numerous adaptive Kalman filters (AKFs) have been proposed. However, achieving online estimation with both high estimation accuracy and fast convergence speed is challenging, especially when both the process noise and measurement noise covariance matrices are relatively inaccurate. Maximum likelihood estimation (MLE) possesses the potential to achieve this goal, since its theoretical accuracy is guaranteed by asymptotic optimality and its convergence speed is fast owing to weak dependence on accurate state estimation. Unfortunately, the maximum likelihood cost function is so intricate that existing MLE methods can only ignore all historical measurement information to achieve online estimation, which cannot adequately realize the potential of MLE. To design online MLE-based AKFs with high estimation accuracy and fast convergence speed, an online exploratory MLE approach is proposed, based on which a mini-batch coordinate descent noise covariance matrix estimation framework is developed. In this framework, the maximum likelihood cost function is simplified for online estimation with fewer and simpler terms, which are selected in a mini-batch and calculated with a backtracking method. This cost function is then sidestepped and solved by adaptively exploring candidate noise covariance matrices while the historical measurement information is adequately utilized. Furthermore, four specific algorithms are derived under this framework to meet different practical requirements in terms of convergence speed, estimation accuracy, and calculation load. Abundant simulations and experiments are carried out to verify the validity and superiority of the proposed algorithms compared with existing state-of-the-art AKFs.
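A toy version of the "exploratory" idea: score a handful of candidate measurement-noise variances by the innovation log-likelihood of a scalar Kalman filter and keep the best. Everything here is a hypothetical stand-in (scalar random-walk model, made-up Q and R values); the paper's framework additionally uses mini-batch term selection, coordinate descent over both covariance matrices, and a backtracking method.

```python
import math, random

def kf_innovation_loglik(z, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter; returns the innovation log-likelihood."""
    x, p, ll = x0, p0, 0.0
    for zk in z:
        p += q                          # predict (F = 1, H = 1)
        s = p + r                       # innovation variance
        nu = zk - x                     # innovation
        ll -= 0.5 * (math.log(2 * math.pi * s) + nu * nu / s)
        k = p / s                       # Kalman gain
        x += k * nu
        p *= 1 - k
    return ll

def explore_r(z, q, candidates):
    """Exploratory MLE: evaluate each candidate R on the batch, keep the best."""
    return max(candidates, key=lambda r: kf_innovation_loglik(z, q, r))

rng = random.Random(1)
x_true, zs = 0.0, []
for _ in range(400):
    x_true += rng.gauss(0.0, math.sqrt(0.01))   # process noise, Q = 0.01
    zs.append(x_true + rng.gauss(0.0, 2.0))     # measurement noise, true R = 4
best_r = explore_r(zs, 0.01, [0.25, 1.0, 4.0, 16.0])
```

With 400 innovations, the likelihood margin in favor of the true variance is large, which is why innovation-likelihood scoring converges quickly even from poor initial guesses.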
BACKGROUND Attention deficit hyperactivity disorder (ADHD) is a prevalent neurodevelopmental disorder in adolescents characterized by inattention, hyperactivity, and impulsivity, which impact cognitive, behavioral, and emotional functioning. Resting-state functional magnetic resonance imaging (rs-fMRI) provides critical insights into the functional architecture of the brain in ADHD. Despite extensive research, the specific brain regions consistently affected in ADHD patients during these formative years have not been comprehensively delineated. AIM To identify consistently vulnerable brain regions in adolescent ADHD patients using rs-fMRI and activation likelihood estimation (ALE) meta-analysis. METHODS We conducted a comprehensive literature search up to August 31, 2024, to identify studies investigating functional brain alterations in adolescents with ADHD. We utilized regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), dynamic ALFF (dALFF), and fractional ALFF (fALFF) analyses, and compared the regions of aberrant spontaneous neural activity in adolescents with ADHD with those in healthy controls (HCs) using ALE. RESULTS Fifteen studies (468 adolescent ADHD patients and 466 HCs) were included. Combining the ReHo and ALFF/fALFF/dALFF data, the results revealed increased activity in the right lingual gyrus [LING, Brodmann area (BA) 18], left LING (BA 18), and right cuneus (CUN, BA 23) in adolescent ADHD patients compared with HCs (voxel size: 592-32 mm³, P < 0.05). Decreased activity was observed in the left medial frontal gyrus (MFG, BA 9) and left precuneus (PCUN, BA 31) in adolescent ADHD patients compared with HCs (voxel size: 960-456 mm³, P < 0.05). Jackknife sensitivity analyses demonstrated robust reproducibility in 11 of the 13 tests for the right LING, left LING, and right CUN and in 11 of the 14 tests for the left MFG and left PCUN. CONCLUSION We identified specific brain regions with both increased and decreased activity in adolescent ADHD patients, enhancing our understanding of the neural alterations that occur during this pivotal stage of development.
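The core ALE computation is small enough to sketch: each study contributes a modeled-activation (MA) probability at a voxel via a Gaussian kernel around its reported focus, and the ALE score is the probabilistic union of those contributions. The distances and FWHM below are hypothetical; real ALE uses sample-size-dependent kernels and permutation-based significance thresholds.

```python
import math

def gaussian_ma(dist_mm, fwhm_mm):
    """Peak-normalized Gaussian kernel value at dist_mm from a reported focus."""
    sigma = fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-dist_mm ** 2 / (2.0 * sigma ** 2))

def ale_score(ma_values):
    """Union of per-study activation probabilities: 1 - prod(1 - MA_i)."""
    prod = 1.0
    for ma in ma_values:
        prod *= 1.0 - ma
    return 1.0 - prod

# three studies report foci 0, 5, and 10 mm from the voxel of interest
mas = [gaussian_ma(d, 10.0) for d in (0.0, 5.0, 10.0)]
score = ale_score(mas)
```

Because the score is a union, converging evidence from several studies raises it above any single study's contribution, which is what makes ALE a consistency measure.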
In deriving a regression model, analysts often have to use variable selection, despite the problems introduced by data-dependent model building. Resampling approaches have been proposed to handle some of the critical issues. In order to assess and compare several strategies, we conduct a simulation study with 15 predictors and a complex correlation structure in the linear regression model. Using sample sizes of 100 and 400 and estimates of the residual variance corresponding to R² of 0.50 and 0.71, we consider 4 scenarios with varying amounts of information. We also consider two examples with 24 and 13 predictors, respectively. We discuss the value of cross-validation, shrinkage, and backward elimination (BE) with varying significance levels. We assess whether 2-step approaches using global or parameterwise shrinkage (PWSF) can improve selected models, and we compare the results to models derived with the LASSO procedure. Besides MSE, we use model sparsity and further criteria for model assessment. The amount of information in the data influences the selected models and the comparison of the procedures. None of the approaches was best in all scenarios. The performance of backward elimination with a suitably chosen significance level was no worse than that of the LASSO, and the BE models selected were much sparser, an important advantage for interpretation and transportability. Compared to global shrinkage, PWSF had better performance. Provided that the amount of information is not too small, we conclude that BE followed by PWSF is a suitable approach when variable selection is a key part of data analysis.
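A compact sketch of backward elimination follows. It is hedged throughout: the data are hypothetical, an AIC-style penalty stands in for the paper's significance-level tests, and the shrinkage and LASSO comparisons are omitted entirely.

```python
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def sse(X, y, cols):
    """Residual sum of squares of OLS on the selected columns plus intercept."""
    Z = [[1.0] + [row[j] for j in cols] for row in X]
    p = len(Z[0])
    XtX = [[sum(z[i] * z[j] for z in Z) for j in range(p)] for i in range(p)]
    Xty = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(p)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(b * zi for b, zi in zip(beta, z))) ** 2
               for z, yi in zip(Z, y))

def backward_eliminate(X, y, penalty=4.0):
    """Repeatedly drop the predictor whose removal most improves an AIC-style
    criterion n*log(SSE/n) + penalty*|cols|; stop when no drop improves it."""
    n, cols = len(y), list(range(len(X[0])))
    crit = lambda c: n * math.log(sse(X, y, c) / n) + penalty * len(c)
    while cols:
        score, drop = min((crit([c for c in cols if c != d]), d) for d in cols)
        if score >= crit(cols):
            break
        cols.remove(drop)
    return cols

rng = random.Random(7)
X = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(60)]
y = [2.0 * row[0] + rng.gauss(0, 0.5) for row in X]   # only x0 matters
selected = backward_eliminate(X, y)
```

A 2-step variant in the spirit of the paper would then estimate shrinkage factors for the selected coefficients by cross-validation.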
For the nonparametric regression model Y_ni = g(x_ni) + ε_ni, i = 1, ..., n, with regularly spaced nonrandom design, the authors study the behavior of the nonlinear wavelet estimator of g(x). When the threshold and truncation parameters are chosen by cross-validation on the average squared error, strong consistency for the case of dyadic sample size and moment consistency for arbitrary sample size are established under some regularity conditions.
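As an illustration of threshold choice by cross-validation: soft-threshold a noisy coefficient vector and pick, from a grid, the threshold that best matches a held-out noisy replicate in average squared error. The coefficients and noise below are hypothetical; the paper applies the idea to empirical wavelet coefficients and proves consistency of the CV choice.

```python
def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t, zeroing the small ones."""
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
            for c in coeffs]

def cv_threshold(train, held_out, grid):
    """Pick the threshold whose denoised train coefficients have the smallest
    average squared error against a held-out noisy replicate."""
    def err(t):
        den = soft_threshold(train, t)
        return sum((d - v) ** 2 for d, v in zip(den, held_out)) / len(train)
    return min(grid, key=err)

true = [5.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0]     # sparse "signal"
train = [t + e for t, e in zip(true, [0.3, -0.2, 0.1, 0.2, -0.3, 0.1, -0.1, 0.2])]
held = [t + e for t, e in zip(true, [-0.2, 0.1, -0.1, 0.1, 0.2, -0.1, 0.2, -0.1])]
t_star = cv_threshold(train, held, [0.0, 0.5, 1.0, 2.0])
```

A moderate threshold wins here: it kills the pure-noise coefficients while barely shrinking the two large ones.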
Background Cardiovascular diseases are closely linked to atherosclerotic plaque development and rupture. Plaque progression prediction is of fundamental significance to cardiovascular research and disease diagnosis, prevention, and treatment. The generalized linear mixed model (GLMM) is an extension of the linear model for categorical responses that accounts for correlation among observations. Methods Magnetic resonance image (MRI) data of carotid atherosclerotic plaques were acquired from 20 patients with consent obtained, and 3D thin-layer models were constructed to calculate plaque stress and strain for plaque progression prediction. Data for ten morphological and biomechanical risk factors, including wall thickness (WT), lipid percent (LP), minimum cap thickness (MinCT), plaque area (PA), plaque burden (PB), lumen area (LA), maximum plaque wall stress (MPWS), maximum plaque wall strain (MPWSn), average plaque wall stress (APWS), and average plaque wall strain (APWSn), were extracted from all slices for analysis. Wall thickness increase (WTI), plaque burden increase (PBI), and plaque area increase (PAI) were chosen as three measures of plaque progression. GLMM with a 5-fold cross-validation strategy was used to calculate prediction accuracy for each predictor and identify the optimal predictor, with prediction accuracy defined as the sum of sensitivity and specificity. All 201 MRI slices were randomly divided into 4 training subgroups and 1 verification subgroup. The training subgroups were used for model fitting, and the verification subgroup was used to evaluate the model. All combinations (1023 in total) of the 10 risk factors were fed to the GLMM, and the prediction accuracy of each predictor was taken from the point on the ROC (receiver operating characteristic) curve with the highest sum of specificity and sensitivity. Results LA was the best single predictor for PBI with the highest prediction accuracy (1.3601) and an area under the ROC curve (AUC) of 0.6540, followed by APWSn (1.3363) with AUC = 0.6342. The optimal predictor among all possible combinations for PBI was the combination of LA, PA, LP, WT, MPWS, and MPWSn with prediction accuracy = 1.4146 (AUC = 0.7158). LA was again the best single predictor for PAI with the highest prediction accuracy (1.1846, AUC = 0.6064), followed by MPWSn (1.1832, AUC = 0.6084). The combination of PA, PB, WT, MPWS, MPWSn, and APWSn gave the best prediction accuracy (1.3025) for PAI, with an AUC of 0.6657. PA was the best single predictor for WTI with the highest prediction accuracy (1.2887, AUC = 0.6415), followed by WT (1.2540, AUC = 0.6097). The combination of PA, PB, WT, LP, MinCT, MPWS, and MPWSn was the best predictor for WTI with a prediction accuracy of 1.3140 and AUC of 0.6552. This indicates that PBI was a more predictable measure than WTI and PAI. The combinational predictors improved prediction accuracy by 9.95%, 4.01%, and 1.96% over the best single predictors for PAI, PBI, and WTI (AUC values improved by 9.78%, 9.45%, and 2.14%), respectively. Conclusions The use of GLMM with a 5-fold cross-validation strategy combining both morphological and biomechanical risk factors could potentially improve the accuracy of carotid plaque progression prediction. This study suggests that a linear combination of multiple predictors can provide a potential improvement to existing plaque assessment schemes.
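The "prediction accuracy" used above, the sum of sensitivity and specificity maximized along the ROC curve, can be computed directly from scores and outcomes. The scores and labels below are hypothetical; the GLMM fitting that would produce real scores is omitted.

```python
def best_sens_plus_spec(scores, labels):
    """Scan score thresholds; return the maximum sensitivity + specificity
    (the 'prediction accuracy' of the study) and the threshold achieving it."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_acc, best_thr = 0.0, None
    for t in sorted(set(scores)):
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 1)
        tn = sum(1 for s, l in zip(scores, labels) if s < t and l == 0)
        acc = tp / pos + tn / neg
        if acc > best_acc:
            best_acc, best_thr = acc, t
    return best_acc, best_thr

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # model-predicted progression risk
labels = [1, 1, 1, 0, 1, 0]               # observed progression
acc, thr = best_sens_plus_spec(scores, labels)
```

A perfect classifier scores 2.0 on this measure and a chance-level one about 1.0, which puts the paper's reported values (1.18-1.41) in context.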
In this paper, we study spatial cross-sectional data models in the form of the matrix exponential spatial specification (MESS), where MESS appears in both the dependent and error terms. Empirical likelihood (EL) ratio statistics are established for the parameters of the MESS model. It is shown that the limiting distributions of the EL ratio statistics follow chi-square distributions, which are used to construct confidence regions for the model parameters. Simulation experiments are conducted to compare the performance of confidence regions based on the EL method and the normal approximation method.
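The MESS transform itself is easy to sketch: the model replaces the usual spatial-lag inverse (I − ρW)⁻¹ with a matrix exponential e^{αW}, which a truncated power series computes directly. The 2-unit weight matrix and α below are hypothetical, and the EL inference is not shown.

```python
import math

def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    """Truncated power series exp(A) = sum_k A^k / k!."""
    n = len(A)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for k in range(1, terms):
        term = mat_mul(term, A)                      # term <- term * A / k
        term = [[t / k for t in row] for row in term]
        out = [[o + t for o, t in zip(ro, rt)] for ro, rt in zip(out, term)]
    return out

# two mutually adjacent spatial units: W = [[0,1],[1,0]], so
# exp(alpha*W) = [[cosh a, sinh a], [sinh a, cosh a]]
alpha = 0.5
E = mat_exp([[0.0, alpha], [alpha, 0.0]])
```

One attraction of MESS noted in the spatial-econometrics literature is that e^{αW} is always invertible (its inverse is e^{−αW}), unlike the autoregressive inverse, which constrains ρ.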
Beamspace super-resolution methods for elevation estimation in multipath environments have attracted significant attention, especially the beamspace maximum likelihood (BML) algorithm. However, the difference beam is rarely used in super-resolution methods, especially in low-elevation estimation. The target airspace information in the difference beam differs from that in the sum beam, and using difference beams does not significantly increase the complexity of the system or the algorithms. Thus, this paper applies the difference beam to the beamformer to improve the elevation estimation performance of the BML algorithm, and the direction and number of beams can be adjusted according to actual needs. The theoretical root mean square error (RMSE) of the target elevation angle and the computational complexity of the proposed algorithms are analyzed. Finally, computer simulations and real data processing results demonstrate the effectiveness of the proposed algorithms.
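The sum/difference idea can be sketched with the classical amplitude-comparison monopulse relation, where the Δ/Σ beam ratio is approximately linear in the off-boresight angle near boresight. The Gaussian beam patterns, squint, and beamwidth below are hypothetical, and the BML estimator itself is a beamspace maximum-likelihood search rather than this simple ratio.

```python
import math

def beam(theta, squint, bw):
    """Gaussian voltage pattern of one beam squinted off boresight."""
    return math.exp(-2.0 * math.log(2.0) * ((theta - squint) / bw) ** 2)

def sum_diff(theta, squint=1.0, bw=3.0):
    """Sum and difference of two symmetrically squinted beams (degrees)."""
    a, b = beam(theta, squint, bw), beam(theta, -squint, bw)
    return a + b, a - b

# calibrate the monopulse slope k_m near boresight...
eps = 1e-4
s0, d0 = sum_diff(eps)
k_m = (d0 / s0) / eps

# ...then estimate a target angle from the measured Delta/Sigma ratio
s, d = sum_diff(0.4)
theta_hat = (d / s) / k_m     # close to the true 0.4 degrees
```

The linear region is narrow; outside it the ratio saturates, which is one reason super-resolution methods such as BML are needed for closely spaced direct and multipath returns.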
Count data are almost always over-dispersed, with the variance exceeding the mean. Several count data models have been proposed by researchers, but the problem of over-dispersion remains unresolved, more so in the context of change point analysis. This study develops a likelihood-based algorithm that detects and estimates multiple change points in a set of count data assumed to follow the Negative Binomial distribution. Discrete change point procedures discussed in the literature work well for equi-dispersed data. The new algorithm produces reliable estimates of change points for both equi-dispersed and over-dispersed count data, hence its advantage over other count data change point techniques. The Negative Binomial multiple change point algorithm was tested using simulated data for different sample sizes and varying positions of change. Changes in the distribution parameters were detected and estimated by conducting a likelihood ratio test on several partitions of the data obtained through step-wise recursive binary segmentation. Critical values for the likelihood ratio test were developed and used to check the significance of the maximum likelihood estimates of the change points. The change point algorithm was found to work best for large datasets, though it also works well for small and medium-sized datasets with little to no error in the location of change points. The algorithm correctly detects changes when they are present and does not flag changes when none exist. Power analysis of the likelihood ratio test for change was performed through Monte Carlo simulation in the single change point setting. Sensitivity analysis of the test power showed that the likelihood ratio test is most powerful when the simulated change points are located midway through the sample data as opposed to the periphery. Further, the test is more powerful when the change is located three-quarters of the way through the sample data than when the change point is closer (a quarter of the way) to the first observation.
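The single-change-point likelihood-ratio machinery can be sketched as follows. Note the deliberate simplification: a Poisson segment likelihood stands in for the paper's Negative Binomial (so over-dispersion is not handled here), the counts are made up, and recursive binary segmentation for multiple changes is omitted.

```python
import math

def pois_ll(xs):
    """Maximized Poisson log-likelihood of a segment (rate = segment mean)."""
    lam = sum(xs) / len(xs)
    if lam == 0:
        return 0.0
    return sum(x * math.log(lam) - lam - math.lgamma(x + 1) for x in xs)

def best_split(xs):
    """Single change point: likelihood-ratio statistic over admissible splits."""
    base = pois_ll(xs)
    stats = {t: 2.0 * (pois_ll(xs[:t]) + pois_ll(xs[t:]) - base)
             for t in range(2, len(xs) - 1)}
    t_hat = max(stats, key=stats.get)
    return t_hat, stats[t_hat]

# counts with a mean shift from ~2 to ~9 at index 8
xs = [2, 3, 2, 1, 3, 2, 2, 3, 9, 10, 8, 11, 9, 10, 9, 8]
t_hat, stat = best_split(xs)
```

In the full algorithm, `stat` would be compared to the simulated critical values and, if significant, the same search would recurse on each half.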
BACKGROUND Adolescent major depressive disorder (MDD) is a significant mental health concern that often leads to recurrent depression in adulthood. Resting-state functional magnetic resonance imaging (rs-fMRI) offers unique insights into the neural mechanisms underlying this condition. However, despite previous research, the specific vulnerable brain regions affected in adolescent MDD patients have not been fully elucidated. AIM To identify consistently vulnerable brain regions in adolescent MDD patients using rs-fMRI and activation likelihood estimation (ALE) meta-analysis. METHODS We performed a comprehensive literature search through July 12, 2023, for studies investigating brain functional changes in adolescent MDD patients. We utilized regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), and fractional ALFF (fALFF) analyses, and compared the regions of aberrant spontaneous neural activity in adolescents with MDD vs healthy controls (HCs) using ALE. RESULTS Ten studies (369 adolescent MDD patients and 313 HCs) were included. Combining the ReHo and ALFF/fALFF data, the results revealed that activity in the right cuneus and left precuneus was lower in adolescent MDD patients than in HCs (voxel size: 648 mm³, P < 0.05), and no brain region exhibited increased activity. Based on the ALFF data alone, we found decreased activity in the right cuneus and left precuneus in adolescent MDD patients (voxel size: 736 mm³, P < 0.05), again with no regions exhibiting increased activity. CONCLUSION Through ALE meta-analysis, we consistently identified the right cuneus and left precuneus as vulnerable brain regions in adolescent MDD patients, increasing our understanding of the neuropathology of affected adolescents.
BACKGROUND Major depressive disorder (MDD) in adolescents and young adults contributes significantly to global morbidity, with inconsistent findings on brain structural changes from structural magnetic resonance imaging studies. Activation likelihood estimation (ALE) offers a method to synthesize these diverse findings and identify consistent brain anomalies. METHODS We performed a comprehensive literature search in the PubMed, Web of Science, Embase, and Chinese National Knowledge Infrastructure databases for neuroimaging studies on MDD among adolescents and young adults published up to November 19, 2023. Two independent researchers performed the study selection, quality assessment, and data extraction. The ALE technique was employed to synthesize findings on localized brain function anomalies in MDD patients, supplemented by sensitivity analyses. RESULTS Twenty-two studies, comprising fourteen diffusion tensor imaging (DTI) studies and eight voxel-based morphometry (VBM) studies and involving 451 MDD patients and 465 healthy controls (HCs) for DTI and 664 MDD patients and 946 HCs for VBM, were included. DTI-based ALE demonstrated significant reductions in fractional anisotropy (FA) values in the right caudate head, right insula, and right lentiform nucleus putamen in adolescents and young adults with MDD compared with HCs, with no regions exhibiting increased FA values. VBM-based ALE did not demonstrate significant alterations in gray matter volume. Sensitivity analyses highlighted consistent findings in the right caudate head (11 of 14 analyses), right insula (10 of 14 analyses), and right lentiform nucleus putamen (11 of 14 analyses). CONCLUSION Structural alterations in the right caudate head, right insula, and right lentiform nucleus putamen in young MDD patients may contribute to the disorder's recurrent nature, offering insights for targeted therapies.
The noise that comes from finite element simulation often causes the model to fall into a local optimum and overfit during generator optimization. Thus, this paper proposes a Gaussian process regression (GPR) model based on conditional likelihood lower bound search (CLLBS) to optimize the generator design; it can filter the noise in the data and search for the global optimum by incorporating the conditional likelihood lower bound search method. The efficiency optimization of a 15 kW permanent magnet synchronous motor is taken as an example. First, the method uses elementary effect analysis to choose the sensitive variables and combines an evolutionary algorithm to design a Latin hypercube sampling plan; the generator-converter system is then simulated on a co-simulation platform to obtain data. A Gaussian process regression model combined with the conditional likelihood lower bound search is established, and a chi-square test is used to optimize the global accuracy of the model. Second, once the model reaches the required accuracy, the Pareto frontier is obtained through the NSGA-II algorithm with the maximum output torque as a constraint. Last, the constrained optimization is transformed into an unconstrained problem by introducing a constrained expected improvement (CEI) optimization method based on the re-interpolation model, which cross-validates the optimization results of the Gaussian process regression model. The above methods increase the generator efficiency by 0.76% and 0.5%, respectively, and the approach can be used for rapid modeling and multi-objective optimization of generator systems.
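The GPR backbone of such a surrogate can be sketched in one dimension. The design variable and efficiency samples below are hypothetical, and the CLLBS noise filtering, sensitivity screening, and NSGA-II stages are omitted; the point of the sketch is that the noise term added to the kernel matrix is what lets the surrogate smooth simulation noise rather than interpolate it.

```python
import math

def rbf(x1, x2, ell=1.0):
    """Squared-exponential (RBF) kernel with length scale ell."""
    return math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def gpr_predict(xs, ys, xq, noise=1e-3):
    """GPR posterior mean at xq for two training points, using the explicit
    2x2 inverse of K + noise*I."""
    a = rbf(xs[0], xs[0]) + noise
    b = rbf(xs[0], xs[1])
    d = rbf(xs[1], xs[1]) + noise
    det = a * d - b * b
    al0 = (d * ys[0] - b * ys[1]) / det      # alpha = (K + noise*I)^{-1} y
    al1 = (-b * ys[0] + a * ys[1]) / det
    return rbf(xq, xs[0]) * al0 + rbf(xq, xs[1]) * al1

# two hypothetical (design variable, efficiency) samples
pred_at_sample = gpr_predict([0.0, 1.0], [1.0, 2.0], 0.0)
pred_between = gpr_predict([0.0, 1.0], [1.0, 2.0], 0.5)
```

With a small noise term the posterior mean nearly interpolates the samples; raising it trades fidelity at the samples for smoothness, which is the behavior the paper exploits against FEM noise.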
The calibration of transfer functions is essential for accurate pavement performance predictions in the Pavement ME design. Several studies have used the least squares approach to calibrate these transfer functions. Least squares is a widely used, simple approach that rests on certain assumptions, and the literature shows these assumptions may not hold for non-normal distributions. This study introduces a new methodology for calibrating the transverse cracking and international roughness index (IRI) models in rigid pavements using maximum likelihood estimation (MLE). Synthetic data for transverse cracking, with and without variability, are generated to illustrate the applicability of MLE using different known probability distributions (exponential, gamma, log-normal, and negative binomial). The approach uses measured data from the Michigan Department of Transportation's (MDOT) pavement management system (PMS) database for 70 jointed plain concrete pavement (JPCP) sections to calibrate and validate the transfer functions. The MLE approach is combined with resampling techniques to improve the robustness of the calibration coefficients. The results show that the MLE transverse cracking model using the gamma distribution consistently outperforms least squares for both synthetic and observed data. For observed data, MLE parameter estimates produced lower SSE and bias than least squares (e.g., for the transverse cracking model, SSE values of 3.98 vs. 4.02 and bias values of 0.00 vs. -0.41). Although the negative binomial distribution is the most suitable fit for the IRI model under MLE, the least squares results are slightly better; the bias values are -0.312 and 0.000 for the MLE and least squares methods, respectively. Overall, the findings indicate that MLE is a robust calibration method, especially for non-normally distributed data such as transverse cracking.
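The MLE mechanics can be sketched with the one distribution that has a closed form: the exponential, used here purely for illustration (the paper fits gamma, log-normal, and negative binomial models to cracking data numerically, and the measurements below are hypothetical).

```python
import math

def exp_mle_rate(xs):
    """Closed-form exponential MLE: lambda_hat = n / sum(x) = 1 / mean."""
    return len(xs) / sum(xs)

def neg_log_lik(lam, xs):
    """Exponential negative log-likelihood, the quantity MLE minimizes."""
    return -sum(math.log(lam) - lam * x for x in xs)

cracks = [0.5, 1.0, 1.5, 2.0]      # hypothetical distress measurements
lam_hat = exp_mle_rate(cracks)     # 1 / sample mean = 0.8
```

By construction `lam_hat` attains a lower negative log-likelihood than any other candidate rate, which is the sense in which MLE is "best" under the assumed distribution; the paper's resampling step then guards that choice against sampling variability.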
As wind power penetration continues to rise, the inertia level of the power system drops markedly, posing new challenges to frequency stability. To effectively assess node inertia in power systems with integrated wind power, an improved maximum likelihood estimation (MLE) parameter identification method based on an autoregressive moving average with exogenous variable (ARMAX) model is proposed for evaluating the inertia of nodes directly connected to generating units. First, an ARMAX model is built to capture the dynamic characteristics of nodes directly connected to generators, and the improved MLE is used to identify the model parameters and thereby estimate the inertia of those nodes. Then, the generator-node inertias are partitioned with the k-means clustering algorithm to compute regional inertia and center frequency, and adaptive polynomial fitting of the frequencies of non-generator nodes yields their node inertias. Finally, an IEEE 39-bus system containing wind turbine generators is built, and heat maps visualize the nodal and regional inertia distribution, verifying the effectiveness of the proposed improvement. The method helps accurately identify the dynamic response characteristics of different nodes and provides strong support for the analysis and planning of wind-integrated power systems.
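The physical relation underlying any such inertia estimate is the swing equation: immediately after a per-unit power imbalance ΔP, the rate of change of frequency is df/dt = f0·ΔP/(2H). A minimal sketch of that inversion follows, with hypothetical numbers; the paper identifies the dynamics with an ARMAX model and improved MLE rather than this direct one-shot inversion.

```python
def inertia_from_rocof(delta_p_pu, rocof_hz_per_s, f0=50.0):
    """Invert the swing equation df/dt = f0 * dP / (2H) for the inertia
    constant H (in seconds, on the unit's MVA base)."""
    return f0 * delta_p_pu / (2.0 * rocof_hz_per_s)

# a 0.1 p.u. imbalance producing a 0.5 Hz/s ROCOF implies H = 5 s
H = inertia_from_rocof(0.1, 0.5)
```

Direct inversion is fragile because measured ROCOF is noisy and governor response contaminates the post-disturbance window, which is precisely the motivation for model-based identification such as the ARMAX-MLE approach above.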
Funding: Supported by the National Natural Science Foundation of China Civil Aviation Joint Fund (U1833110) and by "Research on the Dual Prevention Mechanism and Intelligent Management Technology for Civil Aviation Safety Risks" (YK23-03-05).
Funding: Supported by the National Natural Science Foundation of China (12061017, 12361055) and the Research Fund of Guangxi Key Lab of Multi-source Information Mining & Security (22-A-01-01).
Funding: Supported in part by the National Key Research and Development Program of China (2023YFB3906403), the National Natural Science Foundation of China (62373118, 62173105), and the Natural Science Foundation of Heilongjiang Province of China (ZD2023F002).
Funding: Supported by the National Natural Science Foundation of China (No. 82460282); the Guizhou Province Science and Technology Plan Project (No. ZK-2023-195); the Guizhou High-Level Innovative Talent Project (No. gzwjrs2022-013); and Health Commission of Guizhou Province Projects (No. gzwkj2024-475 and No. gzwkj2021-150).
Abstract: In deriving a regression model, analysts often have to use variable selection, despite the problems introduced by data-dependent model building. Resampling approaches have been proposed to handle some of the critical issues. In order to assess and compare several strategies, we conduct a simulation study with 15 predictors and a complex correlation structure in the linear regression model. Using sample sizes of 100 and 400 and estimates of the residual variance corresponding to R² of 0.50 and 0.71, we consider 4 scenarios with varying amounts of information. We also consider two examples with 24 and 13 predictors, respectively. We discuss the value of cross-validation, shrinkage, and backward elimination (BE) with varying significance levels. We assess whether 2-step approaches using global or parameterwise shrinkage (PWSF) can improve selected models and compare the results to models derived with the LASSO procedure. Besides MSE, we use model sparsity and further criteria for model assessment. The amount of information in the data influences the selected models and the comparison of the procedures. None of the approaches was best in all scenarios. The performance of backward elimination with a suitably chosen significance level was no worse than that of the LASSO, and the BE models selected were much sparser, an important advantage for interpretation and transportability. Compared to global shrinkage, PWSF had better performance. Provided that the amount of information is not too small, we conclude that BE followed by PWSF is a suitable approach when variable selection is a key part of data analysis.
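The contrast drawn in this abstract, backward elimination at a chosen significance level versus the LASSO, can be sketched in a few lines. The data below are a minimal toy setup (10 candidate predictors, 3 truly active), not the 15-predictor correlated design of the simulation study:

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

# Toy data: 10 candidate predictors, only the first 3 truly matter
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(scale=1.0, size=n)

def backward_elimination(X, y, alpha=0.05):
    """Drop the least significant predictor until all p-values <= alpha."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = np.column_stack([np.ones(len(y)), X[:, keep]])
        coef, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ coef
        df = len(y) - Xk.shape[1]
        sigma2 = resid @ resid / df
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xk.T @ Xk)))
        # two-sided t-test p-values for the slopes (skip the intercept)
        pvals = 2 * stats.t.sf(np.abs(coef[1:] / se[1:]), df)
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:
            break
        keep.pop(worst)
    return keep

be_selected = backward_elimination(X, y, alpha=0.05)
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
lasso_selected = [j for j, c in enumerate(lasso.coef_) if abs(c) > 1e-8]
print("BE selected:", be_selected)
print("LASSO selected:", lasso_selected)
```

On strong-signal data like this, BE typically returns the sparser set, which mirrors the sparsity advantage the study reports.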
Abstract: For the nonparametric regression model Y_{ni} = g(x_{ni}) + ε_{ni}, i = 1, …, n, with regularly spaced nonrandom design, the authors study the behavior of the nonlinear wavelet estimator of g(x). When the threshold and truncation parameters are chosen by cross-validation on the average squared error, strong consistency for the case of dyadic sample size and moment consistency for arbitrary sample size are established under some regularity conditions.
Funding: Supported in part by National Sciences Foundation of China grant (11672001) and Jiangsu Province Science and Technology Agency grant (BE2016785); supported in part by Postgraduate Research & Practice Innovation Program of Jiangsu Province grant (KYCX18_0156).
Abstract: Background: Cardiovascular diseases are closely linked to atherosclerotic plaque development and rupture. Plaque progression prediction is of fundamental significance to cardiovascular research and disease diagnosis, prevention, and treatment. Generalized linear mixed models (GLMM) are an extension of linear models for categorical responses that account for the correlation among observations. Methods: Magnetic resonance image (MRI) data of carotid atherosclerotic plaques were acquired from 20 patients with consent obtained, and 3D thin-layer models were constructed to calculate plaque stress and strain for plaque progression prediction. Data for ten morphological and biomechanical risk factors, including wall thickness (WT), lipid percent (LP), minimum cap thickness (MinCT), plaque area (PA), plaque burden (PB), lumen area (LA), maximum plaque wall stress (MPWS), maximum plaque wall strain (MPWSn), average plaque wall stress (APWS), and average plaque wall strain (APWSn), were extracted from all slices for analysis. Wall thickness increase (WTI), plaque burden increase (PBI), and plaque area increase (PAI) were chosen as three measures of plaque progression. GLMM with a 5-fold cross-validation strategy were used to calculate the prediction accuracy of each predictor and to identify the optimal predictor with the highest prediction accuracy, defined as the sum of sensitivity and specificity. All 201 MRI slices were randomly divided into 4 training subgroups and 1 verification subgroup. The training subgroups were used for model fitting, and the verification subgroup was used to evaluate the model. All combinations (1023 in total) of the 10 risk factors were fed to the GLMM, and the prediction accuracy of each predictor was taken from the point on the ROC (receiver operating characteristic) curve with the highest sum of specificity and sensitivity. Results: LA was the best single predictor for PBI with the highest prediction accuracy (1.3601) and an area under the ROC curve (AUC) of 0.6540, followed by APWSn (1.3363) with AUC = 0.6342. The optimal predictor among all possible combinations for PBI was the combination of LA, PA, LP, WT, MPWS, and MPWSn, with prediction accuracy = 1.4146 (AUC = 0.7158). LA was once again the best single predictor for PAI with the highest prediction accuracy (1.1846) with AUC = 0.6064, followed by MPWSn (1.1832) with AUC = 0.6084. The combination of PA, PB, WT, MPWS, MPWSn, and APWSn gave the best prediction accuracy (1.3025) for PAI, with an AUC value of 0.6657. PA was the best single predictor for WTI with the highest prediction accuracy (1.2887) with AUC = 0.6415, followed by WT (1.2540) with AUC = 0.6097. The combination of PA, PB, WT, LP, MinCT, MPWS, and MPWSn was the best predictor for WTI, with prediction accuracy of 1.3140 and AUC = 0.6552. This indicated that PBI was a more predictable measure than WTI and PAI. The combinational predictors improved prediction accuracy by 9.95%, 4.01%, and 1.96% over the best single predictors for PAI, PBI, and WTI (AUC values improved by 9.78%, 9.45%, and 2.14%), respectively. Conclusions: The use of GLMM with a 5-fold cross-validation strategy combining both morphological and biomechanical risk factors could potentially improve the accuracy of carotid plaque progression prediction. This study suggests that a linear combination of multiple predictors can provide a potential improvement to existing plaque assessment schemes.
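The evaluation scheme described above (5-fold cross-validation, with "prediction accuracy" defined as the maximum of sensitivity plus specificity on the ROC curve) can be sketched as follows. A plain logistic regression stands in for the GLMM, and the data are synthetic, with sizes loosely echoing the 201 slices and 10 risk factors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)

# Synthetic stand-in: 201 "slices", 10 risk factors, binary progression
# label driven by two of the features (illustrative only).
n, p = 201, 10
X = rng.normal(size=(n, p))
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

scores, labels = [], []
for train, test in StratifiedKFold(n_splits=5, shuffle=True,
                                   random_state=1).split(X, y):
    model = LogisticRegression().fit(X[train], y[train])
    scores.append(model.predict_proba(X[test])[:, 1])
    labels.append(y[test])

fpr, tpr, _ = roc_curve(np.concatenate(labels), np.concatenate(scores))
# "Prediction accuracy" as defined in the study: max of sensitivity + specificity
pred_accuracy = np.max(tpr + (1 - fpr))
print(f"prediction accuracy = {pred_accuracy:.3f}, AUC = {auc(fpr, tpr):.3f}")
```

The same loop, run once per candidate feature subset, reproduces the study's exhaustive search over the 1023 combinations.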
Funding: Supported by the National Natural Science Foundation of China (12061017, 12161009) and the Research Fund of Guangxi Key Lab of Multi-source Information Mining & Security (22-A-01-01).
Abstract: In this paper, we study spatial cross-sectional data models in the form of the matrix exponential spatial specification (MESS), where MESS appears in both the dependent and error terms. Empirical likelihood (EL) ratio statistics are established for the parameters of the MESS model. It is shown that the limiting distributions of the EL ratio statistics follow chi-square distributions, which are used to construct confidence regions for the model parameters. Simulation experiments are conducted to compare the performance of confidence regions based on the EL method and the normal approximation method.
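The chi-square calibration of an EL ratio statistic can be illustrated in the simplest possible setting, Owen-style empirical likelihood for a scalar mean. This is only the mechanics of the EL ratio; the MESS model above involves far more elaborate estimating equations:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def el_ratio_stat(x, mu):
    """-2 log EL ratio for H0: E[X] = mu, via the concave dual in lambda."""
    z = x - mu
    n = len(z)
    # lambda must keep every weight positive: 1 + lambda * z_i > 0
    lo = (1.0 / n - 1.0) / z.max()
    hi = (1.0 / n - 1.0) / z.min()
    res = minimize_scalar(lambda lam: -np.sum(np.log1p(lam * z)),
                          bounds=(lo + 1e-10, hi - 1e-10), method="bounded")
    return 2.0 * np.sum(np.log1p(res.x * z))

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=100)

stat = el_ratio_stat(x, 5.0)
# Under H0 the statistic is asymptotically chi-square with 1 df, so mu is
# kept in the 95% confidence region when stat < chi2.ppf(0.95, 1)
print("statistic:", stat, "95% critical value:", chi2.ppf(0.95, df=1))
```

Inverting the test, collecting every mu whose statistic falls below the chi-square quantile, yields the EL confidence region, which is the same construction the paper applies to the MESS parameters.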
Funding: Supported by the Fund for Foreign Scholars in University Research and Teaching Programs (B18039).
Abstract: Beamspace super-resolution methods for elevation estimation in multipath environments have attracted significant attention, especially the beamspace maximum likelihood (BML) algorithm. However, the difference beam is rarely used in super-resolution methods, especially in low-elevation estimation. The target airspace information in the difference beam differs from that in the sum beam, and using difference beams does not significantly increase the complexity of the system or the algorithms. Thus, this paper applies the difference beam to the beamformer to improve the elevation estimation performance of the BML algorithm, and the direction and number of beams can be adjusted according to actual needs. The theoretical root mean square error (RMSE) of the target elevation angle and the computational complexity of the proposed algorithms are analyzed. Finally, computer simulations and real data processing results demonstrate the effectiveness of the proposed algorithms.
Abstract: Count data is almost always over-dispersed, with the variance exceeding the mean. Several count data models have been proposed by researchers, but the problem of over-dispersion remains unresolved, more so in the context of change point analysis. This study develops a likelihood-based algorithm that detects and estimates multiple change points in a set of count data assumed to follow the Negative Binomial distribution. Discrete change point procedures discussed in the literature work well for equi-dispersed data. The new algorithm produces reliable estimates of change points for both equi-dispersed and over-dispersed count data, hence its advantage over other count data change point techniques. The Negative Binomial Multiple Change Point Algorithm was tested using simulated data for different sample sizes and varying positions of change. Changes in the distribution parameters were detected and estimated by conducting a likelihood ratio test on several partitions of the data obtained through step-wise recursive binary segmentation. Critical values for the likelihood ratio test were developed and used to check the significance of the maximum likelihood estimates of the change points. The change point algorithm was found to work best for large datasets, though it also works well for small and medium-sized datasets with little to no error in the location of change points. The algorithm correctly detects changes when they are present and does not signal change when none has occurred. Power analysis of the likelihood ratio test for change was performed through Monte Carlo simulation in the single change point setting. Sensitivity analysis of the test power showed that the likelihood ratio test is most powerful when the simulated change points are located mid-way through the sample data, as opposed to when changes are located in the periphery. Further, the test is more powerful when the change is located three-quarters of the way through the sample data compared to when the change point is closer (a quarter of the way) to the first observation.
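The core step that recursive binary segmentation repeats on each sub-segment, a likelihood ratio scan for a single change point in Negative Binomial counts, can be sketched as below. For simplicity the dispersion r is treated as known; the data, r, and the change location are illustrative assumptions, not the paper's simulation design:

```python
import numpy as np
from scipy.stats import nbinom

def nb_loglik(x, r):
    """Profile log-likelihood of a segment with r known, p at its MLE."""
    p = r / (r + x.mean())  # MLE of p given r
    return nbinom.logpmf(x, r, p).sum()

def lr_scan(x, r):
    """Return (best split index, 2*log likelihood ratio) over all splits."""
    n = len(x)
    null = nb_loglik(x, r)
    stats = [2 * (nb_loglik(x[:k], r) + nb_loglik(x[k:], r) - null)
             for k in range(2, n - 1)]
    return int(np.argmax(stats)) + 2, max(stats)

rng = np.random.default_rng(3)
r = 4.0
# Over-dispersed counts (var = mu + mu^2/r); the mean jumps 5 -> 15 at index 100
x = np.concatenate([rng.negative_binomial(r, r / (r + 5.0), size=100),
                    rng.negative_binomial(r, r / (r + 15.0), size=100)])
cp, stat = lr_scan(x, r)
print("estimated change point:", cp, "2*logLR:", round(stat, 1))
```

The full algorithm would compare `stat` against the developed critical values, accept the split if significant, and recurse on `x[:cp]` and `x[cp:]`.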
Funding: Supported by the 2024 Guizhou Provincial Health Commission Science and Technology Fund Project, No. gzwkj2024-475, and the 2022 Provincial Clinical Key Specialty Construction Project.
Abstract: BACKGROUND: Adolescent major depressive disorder (MDD) is a significant mental health concern that often leads to recurrent depression in adulthood. Resting-state functional magnetic resonance imaging (rs-fMRI) offers unique insights into the neural mechanisms underlying this condition. However, despite previous research, the specific vulnerable brain regions affected in adolescent MDD patients have not been fully elucidated. AIM: To identify consistent vulnerable brain regions in adolescent MDD patients using rs-fMRI and activation likelihood estimation (ALE) meta-analysis. METHODS: We performed a comprehensive literature search through July 12, 2023, for studies investigating brain functional changes in adolescent MDD patients. We utilized regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), and fractional ALFF (fALFF) analyses. We compared the regions of aberrant spontaneous neural activity in adolescents with MDD vs healthy controls (HCs) using ALE. RESULTS: Ten studies (369 adolescent MDD patients and 313 HCs) were included. Combining the ReHo and ALFF/fALFF data, the results revealed that activity in the right cuneus and left precuneus was lower in adolescent MDD patients than in HCs (voxel size: 648 mm³, P < 0.05), and no brain region exhibited increased activity. Based on the ALFF data, we found decreased activity in the right cuneus and left precuneus in adolescent MDD patients (voxel size: 736 mm³, P < 0.05), with no regions exhibiting increased activity. CONCLUSION: Through ALE meta-analysis, we consistently identified the right cuneus and left precuneus as vulnerable brain regions in adolescent MDD patients, increasing our understanding of the neuropathology of affected adolescents.
Funding: Supported by the Guizhou Province Science and Technology Plan Project, No. ZK-2023-195, and the 2021 Health Commission of Guizhou Province Project, No. gzwkj2021-150.
Abstract: BACKGROUND: Major depressive disorder (MDD) in adolescents and young adults contributes significantly to global morbidity, with inconsistent findings on brain structural changes from structural magnetic resonance imaging studies. Activation likelihood estimation (ALE) offers a method to synthesize these diverse findings and identify consistent brain anomalies. METHODS: We performed a comprehensive literature search in PubMed, Web of Science, Embase, and Chinese National Knowledge Infrastructure databases for neuroimaging studies on MDD among adolescents and young adults published up to November 19, 2023. Two independent researchers performed the study selection, quality assessment, and data extraction. The ALE technique was employed to synthesize findings on localized brain function anomalies in MDD patients, supplemented by sensitivity analyses. RESULTS: Twenty-two studies, comprising fourteen diffusion tensor imaging (DTI) studies and eight voxel-based morphometry (VBM) studies and involving 451 MDD patients and 465 healthy controls (HCs) for DTI and 664 MDD patients and 946 HCs for VBM, were included. DTI-based ALE demonstrated significant reductions in fractional anisotropy (FA) values in the right caudate head, right insula, and right lentiform nucleus putamen in adolescents and young adults with MDD compared with HCs, with no regions exhibiting increased FA values. VBM-based ALE did not demonstrate significant alterations in gray matter volume. Sensitivity analyses highlighted consistent findings in the right caudate head (11 of 14 analyses), right insula (10 of 14 analyses), and right lentiform nucleus putamen (11 of 14 analyses). CONCLUSION: Structural alterations in the right caudate head, right insula, and right lentiform nucleus putamen in young MDD patients may contribute to the recurrent nature of the disorder, offering insights for targeted therapies.
Funding: Supported in part by the National Key Research and Development Program of China (2019YFB1503700) and the Hunan Natural Science Foundation-Science and Education Joint Project (2019JJ70063).
Abstract: The noise that comes from finite element simulation often causes the model to fall into a local optimal solution and to overfit during generator optimization. Thus, this paper proposes a Gaussian Process Regression (GPR) model based on Conditional Likelihood Lower Bound Search (CLLBS) to optimize the design of the generator, which can filter the noise in the data and search for the global optimum, taking the efficiency optimization of a 15 kW Permanent Magnet Synchronous Motor as an example. First, the method uses elementary effect analysis to choose the sensitive variables and combines an evolutionary algorithm to design a super Latin cube sampling plan; the generator-converter system is then simulated on a co-simulation platform to obtain data. A Gaussian process regression model combined with the conditional likelihood lower bound search method is established, and a chi-square test is used to optimize the accuracy of the model globally. Second, after the model reaches the required accuracy, the Pareto frontier is obtained through the NSGA-II algorithm with the maximum output torque as a constraint. Last, the constrained optimization is transformed into an unconstrained optimization problem by introducing the maximum constrained expected improvement (CEI) optimization method based on the re-interpolation model, which cross-validates the optimization results of the Gaussian process regression model. The above method increases the efficiency of the generator by 0.76% and 0.5%, respectively, and can be used for rapid modeling and multi-objective optimization of generator systems.
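The noise-filtering idea underlying the GPR surrogate can be shown in miniature: a GP whose kernel carries an explicit white-noise term, fitted by marginal likelihood maximization, recovers the latent response from noisy samples. The 1-D toy objective below stands in for the FEA efficiency response; the paper's CLLBS refinement and NSGA-II stage are not reproduced here:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=(60, 1))
f_true = np.sin(4.0 * np.pi * X[:, 0])          # latent "efficiency" surface
y = f_true + rng.normal(scale=0.2, size=60)     # noisy simulation samples

# WhiteKernel lets the GP attribute part of the variance to simulation noise
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=0.04)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# The posterior mean smooths out the simulation noise
rmse = np.sqrt(np.mean((gpr.predict(X) - f_true) ** 2))
print("RMSE of denoised prediction vs latent surface:", round(rmse, 3))
```

Because the hyperparameters (including the noise level) are learned from the data, the surrogate gives the optimizer a smooth surface to search, which is what keeps it from chasing local optima created by simulation noise.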
Funding: The Michigan Department of Transportation (MDOT) provided financial support for this study (report no. SPR1723).
Abstract: The calibration of transfer functions is essential for accurate pavement performance predictions in the Pavement-ME design. Several studies have used the least squares approach to calibrate these transfer functions. Least squares is a widely used, simplistic approach based on certain assumptions. The literature shows that these least squares assumptions may not apply to non-normal distributions. This study introduces a new methodology for calibrating the transverse cracking and international roughness index (IRI) models in rigid pavements using maximum likelihood estimation (MLE). Synthetic data for transverse cracking, with and without variability, are generated to illustrate the applicability of MLE using different known probability distributions (exponential, gamma, log-normal, and negative binomial). The approach uses measured data from the Michigan Department of Transportation's (MDOT) pavement management system (PMS) database for 70 jointed plain concrete pavement (JPCP) sections to calibrate and validate the transfer functions. The MLE approach is combined with resampling techniques to improve the robustness of the calibration coefficients. The results show that the MLE transverse cracking model using the gamma distribution consistently outperforms least squares for synthetic and observed data. For observed data, MLE estimates of parameters produced lower SSE and bias than least squares (e.g., for the transverse cracking model, the SSE values are 3.98 vs. 4.02, and the bias values are 0.00 and -0.41). Although the negative binomial distribution is the most suitable fit for the IRI model under MLE, the least squares results are slightly better than MLE; the bias values are -0.312 and 0.000 for the MLE and least squares methods. Overall, the findings indicate that MLE is a robust method for calibration, especially for non-normally distributed data such as transverse cracking.
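The difference between least squares and distribution-aware MLE calibration can be sketched on synthetic data. A power-law "transfer function" mu(x) = a * x**b with gamma-distributed responses is an illustrative assumption here, not MDOT's actual cracking model:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
a_true, b_true, k_true = 2.0, 1.5, 5.0          # k = gamma shape
x = rng.uniform(1.0, 10.0, size=300)            # e.g., pavement age
mu = a_true * x ** b_true
y = rng.gamma(shape=k_true, scale=mu / k_true)  # E[y] = mu, skewed noise

# Least squares fit of (a, b) -- ignores the skewed error distribution
ls = optimize.least_squares(
    lambda t: t[0] * x ** t[1] - y, x0=[1.0, 1.0]).x

# Gamma MLE: maximize gamma log-likelihood with the mean tied to a*x**b
def nll(theta):
    a, b, k = np.exp(theta)                     # log-parameterized, all > 0
    m = a * x ** b
    return -np.sum(stats.gamma.logpdf(y, k, scale=m / k))

mle = optimize.minimize(nll, x0=np.log([ls[0], ls[1], 2.0]),
                        method="Nelder-Mead").x
a_mle, b_mle, k_mle = np.exp(mle)
print("LS  (a, b):", ls.round(2))
print("MLE (a, b, k):", round(a_mle, 2), round(b_mle, 2), round(k_mle, 2))
```

Because the gamma likelihood down-weights the heavy right tail instead of penalizing it quadratically, the MLE coefficients are typically less distorted by extreme cracking observations, which is the mechanism behind the study's SSE and bias improvements.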
Abstract: With the continued growth of wind power penetration, the inertia level of power systems has dropped markedly, posing new challenges to system frequency stability. To effectively assess changes in the nodal inertia of a power system with integrated wind power, an improved maximum likelihood estimation (MLE) parameter identification method based on an autoregressive moving average with exogenous variable (ARMAX) model is proposed for estimating the inertia at nodes directly connected to generating units. First, an ARMAX model is constructed to capture the dynamic behavior of nodes directly connected to generator units, and the improved MLE is used to identify the model parameters and thereby estimate the inertia of those nodes. Then, the k-means clustering algorithm is applied to partition the generator-node inertias into regions, from which the regional inertia and center frequency of the system are computed; the frequencies of non-generator nodes are further processed by adaptive polynomial fitting to obtain their nodal inertias. Finally, an IEEE 39-bus test system containing wind turbine units is built, and heat maps are drawn to visualize the inertia distribution across the nodes and regions of the power system, verifying the effectiveness of the proposed improved method. The method helps to accurately identify the dynamic response characteristics of different nodes in the system and provides strong support for the analysis and planning of wind-integrated power systems.