Since the introduction of vision Transformers into the computer vision field, many vision tasks, such as semantic segmentation, have undergone radical changes. Although the Transformer enhances the correlation of each local feature of an image object in the hidden space through the attention mechanism, it is difficult for a segmentation head to accomplish mask prediction for dense embeddings of multi-category, multi-local features. We present the patch prototype vision Transformer (PPFormer), a Transformer architecture for semantic segmentation based on knowledge-embedded patch prototypes. 1) The hierarchical Transformer encoder generates multi-scale, multi-layered patch features, using seamless patch projection to obtain information from multi-scale patches and feature-clustered self-attention to enhance the interplay of multi-layered visual information with implicit position encoding. 2) PPFormer uses a non-parametric prototype decoder that extracts region observations, which represent significant parts of the objects, through unlearnable patch prototypes, and then calculates the similarity between patch prototypes and pixel embeddings. The proposed contrasting patch prototype alignment module, which uses new patch prototypes to update the prototype bank, effectively maintains class boundaries for prototypes. For different application scenarios, we provide PPFormer-S, PPFormer-M, and PPFormer-L by scaling up the architecture. Experimental results demonstrate that PPFormer outperforms fully convolutional network (FCN)- and attention-based semantic segmentation models on the PASCAL VOC 2012, ADE20K, and Cityscapes datasets.
This study presents the results of a Monte Carlo simulation comparing the statistical power of the Siegel-Tukey and Savage tests. The main purpose of the study is to evaluate the statistical power of both tests in scenarios involving normal, platykurtic, and skewed distributions over different sample sizes and standard deviation values. In the study, standard deviation ratios were set as 2, 3, 4, 1/2, 1/3, and 1/4, and power comparisons were made between small and large sample sizes. For equal sample sizes, small sample sizes of 5, 8, 10, 12, 16, and 20 and large sample sizes of 25, 50, 75, and 100 were used. For unequal sample sizes, the small-sample combinations (4, 16), (8, 16), (10, 20), (16, 4), (16, 8), and (20, 10) and the large-sample combinations (10, 30), (30, 10), (50, 75), (50, 100), (75, 50), (75, 100), (100, 50), and (100, 75) were examined in detail. According to the findings, the power analysis under variance heterogeneity shows that the Siegel-Tukey test has higher statistical power than the nonparametric Savage test at both small and large sample sizes. In particular, the Siegel-Tukey test offers higher precision and power under variance heterogeneity, regardless of whether the sample sizes are equal.
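The Monte Carlo power comparison described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the alternating-extreme ranking, the choice of a rank-sum test on the Siegel-Tukey ranks, and all sample sizes and seeds are our own assumptions.

```python
import numpy as np
from scipy import stats

def siegel_tukey_ranks(n):
    # Order in which sorted positions receive ranks 1..n:
    # the smallest gets 1, the two largest get 2-3,
    # the next two smallest get 4-5, and so on.
    positions, left, right = [], 0, n - 1
    take_left, count = True, 1
    while left <= right:
        for _ in range(count):
            if left > right:
                break
            if take_left:
                positions.append(left)
                left += 1
            else:
                positions.append(right)
                right -= 1
        take_left, count = not take_left, 2
    ranks = np.empty(n)
    for rank, pos in enumerate(positions, start=1):
        ranks[pos] = rank
    return ranks

def siegel_tukey_power(n1, n2, sd_ratio, reps=400, alpha=0.05, seed=0):
    # Monte Carlo power: fraction of replications that reject equal scale.
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        pooled = np.concatenate([rng.normal(0, 1, n1),
                                 rng.normal(0, sd_ratio, n2)])
        st = np.empty(n1 + n2)
        st[np.argsort(pooled)] = siegel_tukey_ranks(n1 + n2)
        # A rank-sum test applied to the Siegel-Tukey ranks
        # is one way to realize the Siegel-Tukey test.
        p = stats.mannwhitneyu(st[:n1], st[n1:]).pvalue
        rejections += p < alpha
    return rejections / reps

power_h0 = siegel_tukey_power(25, 25, 1.0)  # size check (ratio 1)
power_h1 = siegel_tukey_power(25, 25, 4.0)  # power at ratio 4
```

Under the null the rejection rate should sit near the nominal 0.05, while a standard deviation ratio of 4 should push the power well above it.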
Detecting changes in surface air temperature in mid- and low-altitude mountainous regions is essential for a comprehensive understanding of how warming trends vary with altitude. We use daily surface air temperature data from 64 meteorological stations in the Wuyi Mountains and adjacent regions to analyze the spatio-temporal patterns of temperature change. The results show that the Wuyi Mountains experienced significant warming from 1961 to 2018. The warming trend is 0.20 ℃/decade for the mean temperature, 0.17 ℃/decade for the maximum temperature, and 0.26 ℃/decade for the minimum temperature. In 1961-1990, more than 63% of the stations showed a decreasing trend in annual mean temperature, mainly because the maximum temperature decreased during this period. However, in 1971-2000, 1981-2010, and 1991-2018, the maximum, minimum, and mean temperatures all increased. The mean temperature increased fastest in the southeastern coastal plains, the maximum temperature increased fastest in the northwestern mountainous region, and the minimum temperature increased faster in the southeastern coastal and northwestern mountainous regions than in the central area. Meanwhile, this study suggests that elevation does not affect warming in the Wuyi Mountains. These results are beneficial for understanding climate change in humid subtropical middle and low mountains.
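A trend expressed in ℃/decade, as above, is typically an ordinary least-squares slope of annual temperature on year, scaled by ten. A minimal sketch on a synthetic series (the station values and noise level are hypothetical, not the study's data):

```python
import numpy as np

def trend_per_decade(years, temps):
    # OLS slope of temperature on year, scaled from per-year to per-decade.
    slope_per_year = np.polyfit(years, temps, 1)[0]
    return 10.0 * slope_per_year

# Hypothetical annual-mean series with a built-in 0.020 ℃/yr
# (0.20 ℃/decade) trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1961, 2019)
temps = 15.0 + 0.020 * (years - 1961) + rng.normal(0.0, 0.3, years.size)
trend = trend_per_decade(years, temps)
```

Over a 58-year record the fitted slope recovers the built-in trend to within a few hundredths of a degree per decade.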
Current research on the dynamics and vibrations of geared rotor systems primarily focuses on deterministic models. However, uncertainties inevitably exist in gear systems; they cause uncertainties in system parameters and subsequently affect the accurate evaluation of system dynamic behavior. In this study, a dynamic model of a geared rotor system with mixed parameter and model uncertainties is proposed. Initially, the dynamic model of the geared rotor-bearing system with deterministic parameters is established using a finite element method. Subsequently, a nonparametric method is introduced to model the hybrid uncertainties in the dynamic model. Deviation coefficients and dispersion parameters are used to reflect the levels of parameter and model uncertainty. As a case study, the effects of uncertain bearing and mesh stiffness on the vibration responses of a geared rotor system are evaluated. The results demonstrate that the influence of uncertainty varies among different model types. Model uncertainties have a more significant effect than parametric uncertainties, whereas hybrid uncertainties increase the nonlinearities and complexities of the system's dynamic responses. These findings provide valuable insights into the dynamic behavior of geared systems with hybrid uncertainties.
The present paper deals with the problem of nonparametric kernel density estimation of the trend function for stochastic processes driven by fractional Brownian motion of the second kind. The consistency, the rate of convergence, and the asymptotic normality of the kernel-type estimator are discussed. Moreover, we prove that the rate of convergence of the kernel-type estimator depends on the smoothness of the trend of the non-perturbed system.
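The kernel-type estimator studied above is, in spirit, a locally weighted average of the observed path. A minimal Nadaraya-Watson-style sketch on a discretized toy path (the sinusoidal trend, Gaussian noise stand-in for the driving process, and bandwidth are our assumptions):

```python
import numpy as np

def kernel_trend(t_obs, x_obs, t_grid, h):
    # Nadaraya-Watson-type weighted average with a Gaussian kernel
    # of bandwidth h: a kernel-type trend estimate at each grid point.
    out = np.empty(t_grid.size)
    for k, t in enumerate(t_grid):
        w = np.exp(-0.5 * ((t - t_obs) / h) ** 2)
        out[k] = np.sum(w * x_obs) / np.sum(w)
    return out

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 400)
# Toy path: smooth trend plus a noise proxy for the perturbation.
x = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(t.size)
est = kernel_trend(t, x, t, h=0.05)
true = np.sin(2 * np.pi * t)
# Interior RMSE (boundary bins trimmed to avoid edge bias).
rmse = float(np.sqrt(np.mean((est[20:-20] - true[20:-20]) ** 2)))
```

The smoothness-dependent rate in the paper shows up here informally: a smoother trend tolerates a larger bandwidth and averages away more noise.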
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate defect prediction remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson-Gower scaling technique to identify the relevant software metrics by measuring similarity with the Dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault prediction is performed with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder-Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The results demonstrate the superior performance of the proposed SQADEN technique, with maximum accuracy, sensitivity, and specificity improved by 3%, 3%, 2%, and 3%, and time and space reduced by 13% and 15%, compared with two state-of-the-art methods.
Healthcare decisions are based on scientific evidence obtained from medical studies by gathering and analyzing data to obtain the best results. When analyzing data, biostatistics is a powerful tool, but many healthcare professionals lack knowledge in this field. This lack of knowledge can manifest itself in situations such as choosing the wrong statistical test for a given situation, or applying a statistical test without checking its assumptions, leading to inaccurate results and misleading conclusions. With the help of this narrative review, the aim is to bring biostatistics closer to healthcare professionals by answering certain questions: How should the distribution of the data be described? How should the normality of the data be assessed? How should the data be transformed? And how should one choose between nonparametric and parametric tests? Through this work, our hope is that the reader will be able to choose the right test for the right situation, in order to obtain the most accurate results.
This article develops a procedure for screening variables, in ultra-high-dimensional settings, based on their predictive significance. This is achieved by ranking the variables according to the variance of their respective marginal regression functions (RV-SIS). We show that, under some mild technical conditions, RV-SIS possesses a sure screening property, as defined by Fan and Lv (2008). Numerical comparisons suggest that RV-SIS has competitive performance compared with other screening procedures, and outperforms them in many different model settings.
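The ranking idea above can be sketched directly: fit a nonparametric marginal regression of the response on each covariate and score the covariate by the variance of the fitted values. This is an illustrative reconstruction; the kernel smoother, bandwidth, and toy model below are our assumptions, not the article's implementation.

```python
import numpy as np

def nw_fit(x, y, h):
    # Nadaraya-Watson fit of y on a single covariate (Gaussian kernel).
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def rv_sis_scores(X, y, h=0.3):
    # Score each variable by the variance of its fitted
    # marginal regression function (the RV-SIS idea).
    return np.array([np.var(nw_fit(X[:, j], y, h)) for j in range(X.shape[1])])

# Toy example: only columns 0 and 1 carry signal among 30 covariates.
rng = np.random.default_rng(7)
X = rng.standard_normal((200, 30))
y = 2.0 * X[:, 0] + 2.0 * np.sin(2.0 * X[:, 1]) + 0.3 * rng.standard_normal(200)
scores = rv_sis_scores(X, y)
top10 = set(np.argsort(scores)[::-1][:10].tolist())
```

The sure screening property corresponds to the active variables (columns 0 and 1 here) landing in the retained top set with high probability.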
This paper proposes a set of nonparametric statistical tools for analyzing the system resilience of civil structures and infrastructure and its migration upon changes in critical system parameters. The work is founded on the classic theoretical framework in which system resilience is defined in multiple dimensions for a constructed system. Consequently, system resilience can lose its parametric form as a random variable, falling into the realm of nonparametric statistics. With this nonparametric shift, traditional distribution-based statistics are ineffective in characterizing the migration of system resilience due to the variation of system parameters. Three statistical tools are proposed under the nonparametric statistical resilience analysis (npSRA) framework: nonparametric copula-based sensitivity analysis, two-sample resilience test analysis, and a novel tool for resilience attenuation analysis. To demonstrate the use of this framework, we focus on electric distribution systems, which are common in urban, suburban, and rural areas and vulnerable to tropical storms. A novel procedure for considering resourcefulness parameters in the socioeconomic space is proposed. Numerical results reveal the complex statistical relations between the distributions of system resilience, physical aging, and socioeconomic parameters for the power distribution system. The proposed resilience distance computation and resilience attenuation analysis further suggest two suitable nonparametric distance metrics, the Earth Mover's Distance (EMD) and the Cramér-von Mises (CvM) metric, for characterizing the migration of system resilience for electric distribution systems.
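Both distance metrics named above are available off the shelf. A minimal sketch comparing two hypothetical resilience samples (the beta-distributed scores standing in for resilience on a [0, 1] scale are our invention, not the paper's data):

```python
import numpy as np
from scipy.stats import cramervonmises_2samp, wasserstein_distance

# Hypothetical resilience samples: a baseline system vs. an aged system.
rng = np.random.default_rng(0)
r_base = rng.beta(5, 2, 1000)
r_aged = rng.beta(4, 3, 1000)

emd = wasserstein_distance(r_base, r_aged)             # Earth Mover's Distance
cvm = cramervonmises_2samp(r_base, r_aged).statistic   # Cramér-von Mises statistic
self_emd = wasserstein_distance(r_base, r_base)        # distance to itself is zero
```

A growing EMD or CvM value between the baseline and perturbed samples is exactly the "migration" of resilience that the npSRA framework tracks.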
The objectives of this paper are to demonstrate the algorithms employed by three statistical software programs (R, Real Statistics using Excel, and SPSS) for calculating the exact two-tailed probability of the Wald-Wolfowitz one-sample runs test for randomness, to present a novel approach for computing this probability, and to compare the four procedures by generating samples of 10 and 11 data points, varying the parameters n0 (number of zeros) and n1 (number of ones), as well as the number of runs. Fifty-nine samples were created to replicate the behavior of the distribution of the number of runs with 10 and 11 data points. The exact two-tailed probabilities for the four procedures were compared using Friedman's test. Given the significant difference in central tendency, post-hoc comparisons were conducted using Conover's test with the Benjamini-Yekutieli correction. It is concluded that the procedures of Real Statistics using Excel and R exhibit some inadequacies in the calculation of the exact two-tailed probability, whereas the new proposal and the SPSS procedure are more suitable. The proposed algorithm has a more transparent rationale than that of SPSS, albeit being somewhat more conservative. We recommend its implementation for this test and its application to others, such as the binomial and sign tests.
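The exact distribution of the number of runs R given n0 zeros and n1 ones has a standard closed form, which any of the compared procedures must start from. A sketch with one common two-tailed convention (summing all outcomes no more probable than the observed one); as the paper itself shows, packages differ in this last step, so the convention below is an assumption:

```python
from math import comb

def runs_pmf(n0, n1):
    # Exact distribution of the number of runs R in a binary
    # sequence with n0 zeros and n1 ones (Wald-Wolfowitz).
    total = comb(n0 + n1, n0)
    pmf = {}
    for r in range(2, n0 + n1 + 1):
        if r % 2 == 0:
            k = r // 2
            ways = 2 * comb(n0 - 1, k - 1) * comb(n1 - 1, k - 1)
        else:
            k = (r - 1) // 2
            ways = (comb(n0 - 1, k - 1) * comb(n1 - 1, k)
                    + comb(n0 - 1, k) * comb(n1 - 1, k - 1))
        if ways:
            pmf[r] = ways / total
    return pmf

def exact_two_tailed(n0, n1, r_obs):
    # One convention: sum P(R = r) over every r whose probability
    # does not exceed P(R = r_obs). Software packages differ here.
    pmf = runs_pmf(n0, n1)
    p_obs = pmf.get(r_obs, 0.0)
    return sum(p for p in pmf.values() if p <= p_obs + 1e-12)
```

For n0 = n1 = 5 and an observed R = 2 (maximal clustering), this convention gives 4/252, i.e. about 0.016.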
This study examines the nexus between the good and bad volatilities of three technological revolutions, namely financial technology (FinTech), the Internet of Things, and artificial intelligence and technology, and the two main conventional and Islamic cryptocurrency platforms, Bitcoin and Stellar, via three approaches: quantile cross-spectral coherence, quantile-VAR connectedness, and quantile-based non-linear causality-in-mean and variance analysis. The results are as follows: (1) under normal market conditions, over long-run horizons there is a significant positive cross-spectral relationship between FinTech's positive volatilities and Stellar's negative volatilities; (2) Stellar's negative and positive volatilities exhibit the highest net spillovers at the lower and upper tails, respectively; and (3) the quantile-based causality results indicate that Bitcoin's good (bad) volatilities can lead to bad (good) volatilities in all three smart technologies between normal and bull market conditions. Moreover, the Bitcoin industry's negative volatilities have a bilateral cause-and-effect relationship with FinTech's positive volatilities. By analyzing the second moment, we found that Bitcoin's negative volatilities are the only cause variable that generates FinTech's good volatility in a unidirectional manner. As for Stellar, only bad volatilities have the potential to signal good volatilities for cutting-edge technologies in some middle quantiles, whereas good volatilities have no significant effect. Hence, the trade-off between Bitcoin and cutting-edge technologies, especially FinTech-related advancements, appears broader and more random than the Stellar-innovative technologies nexus. The findings provide valuable insights for FinTech companies, blockchain developers, crypto-asset regulators, portfolio managers, and high-tech investors.
Normality testing is a fundamental hypothesis test in the statistical analysis of key biological indicators of diabetes. If the normality assumption is violated, test results may deviate from the true values, leading to incorrect inferences and conclusions and ultimately affecting the validity and accuracy of statistical inference. Considering this, the study designs a unified analysis scheme for different data types based on parametric and non-parametric test methods. The data were grouped according to sample type and divided into discrete data and continuous data. To account for differences among subgroups, the conventional chi-squared test was used for discrete data. The normal distribution is the basis of many statistical methods; if the data do not follow a normal distribution, many statistical methods fail or produce incorrect results. Therefore, before data analysis and modeling, the data were divided into normal and non-normal groups through normality testing. For normally distributed data, parametric statistical methods were used to judge the differences between groups. For non-normal data, non-parametric tests were employed to improve the accuracy of the analysis. Statistically significant indicators were retained according to the P-value of the statistical test or the corresponding statistic. These indicators were then combined with relevant medical background to further explore the etiology leading to the occurrence or transformation of diabetes status.
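The normality-gated branch of the scheme, for continuous data, can be sketched as follows. The specific tests (Shapiro-Wilk, Welch's t, Mann-Whitney U) and the illustrative biomarker values are our assumptions; the study does not name its exact test battery here.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    # Shapiro-Wilk normality check on each group; Welch's t-test if both
    # look normal, otherwise the nonparametric Mann-Whitney U test.
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return "welch_t", stats.ttest_ind(a, b, equal_var=False).pvalue
    return "mann_whitney", stats.mannwhitneyu(a, b).pvalue

# Hypothetical continuous indicator for two groups (illustrative only).
rng = np.random.default_rng(3)
controls = rng.normal(5.2, 1.0, 40)
cases = rng.normal(6.2, 1.0, 40)
test_used, p_value = compare_groups(controls, cases)
```

Discrete indicators would bypass this branch and go to the chi-squared test, as the scheme describes.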
In multilevel thresholding segmentation of an image, the number of classes is usually given by the supervisor. To solve this problem, a fast multilevel thresholding algorithm that considers both the threshold values and the number of classes is proposed based on maximum entropy, and a self-adaptive criterion for the number of classes is given. The algorithm can obtain the thresholds and automatically decide the number of classes. Experimental results show that the algorithm is effective.
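The maximum-entropy criterion underlying such algorithms, in its single-threshold (Kapur-style) form, picks the grey level that maximizes the summed entropies of the two resulting classes. A sketch on a synthetic bimodal histogram (the histogram itself is our toy example; the paper's multilevel, self-adaptive version extends this idea):

```python
import numpy as np

def max_entropy_threshold(hist):
    # Kapur-style maximum-entropy threshold for a grey-level histogram:
    # choose t maximizing H(background) + H(foreground).
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, len(p)):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = (-(q0[q0 > 0] * np.log(q0[q0 > 0])).sum()
             - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum())
        if h > best_h:
            best_h, best_t = h, t
    return best_t

# Bimodal toy histogram: grey-level modes near 60 and 180.
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 12, 5000)])
hist, _ = np.histogram(np.clip(vals, 0, 255), bins=256, range=(0, 255))
t = max_entropy_threshold(hist)
```

On a well-separated bimodal histogram the selected threshold falls in the valley between the two modes.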
A new algorithm based on the projection method with an implicit finite difference technique was established to calculate the velocity fields and pressure. The calculation region can be divided into different regions according to the Reynolds number. In the far-wall region, the thermal melt flow was treated as Newtonian flow; in the near-wall region, it was treated as non-Newtonian flow. The correctness of the new algorithm was verified through nonparametric statistical methods and experiment. The simulation results show that the new algorithm computes more quickly than the solution algorithm-volume of fluid method using the explicit difference method.
A nonparametric Bayesian method is presented to classify MPSK (M-ary phase shift keying) signals. The MPSK signals, with unknown signal-to-noise ratios (SNRs), are modeled as a Gaussian mixture with unknown means and covariances in the constellation plane, and a clustering method is proposed to estimate the probability density of the MPSK signals. The method is based on nonparametric Bayesian inference, which introduces the Dirichlet process as the prior on the mixture coefficients and applies a normal-inverse-Wishart (NIW) distribution as the prior on the unknown means and covariances. Then, according to the received signals, the parameters are adjusted by a Markov chain Monte Carlo (MCMC) random sampling algorithm. Through iterations, the density of the MPSK signals can be estimated. Simulation results show that the correct recognition ratio for 2/4/8PSK is greater than 95% under the condition that SNR > 5 dB and 1600 symbols are used.
In time series modeling, the residuals are often checked for white noise and normality. In practice, the commonly used tests are the Ljung-Box test, the McLeod-Li test, and the Lin-Mudholkar test. In this paper, we present a nonparametric approach for checking the residuals of time series models. This approach is based on the maximal correlation coefficient ρ*² between the residuals and time t. The basic idea is to use the bootstrap to form the null distribution of the statistic ρ*² under the null hypothesis H₀: ρ*² = 0. For calculating ρ*², we propose a ρ algorithm, analogous to the ACE procedure. A power study shows that this approach is more powerful than the Ljung-Box test. Meanwhile, some numerical results and two examples are reported.
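The resampling idea behind the check can be sketched with a simpler stand-in statistic: squared Pearson correlation with time instead of the maximal correlation ρ*², and a permutation of the residuals to enforce the null of no association. The stand-in statistic and the toy residual series are our assumptions, not the paper's ρ algorithm.

```python
import numpy as np

def resampled_residual_check(resid, n_resamples=2000, seed=0):
    # Stand-in statistic: squared Pearson correlation between
    # the residuals and time t (the paper uses maximal correlation).
    rng = np.random.default_rng(seed)
    t = np.arange(resid.size)

    def stat(r):
        return np.corrcoef(t, r)[0, 1] ** 2

    obs = stat(resid)
    # Shuffling the residuals destroys any time dependence, giving
    # draws of the statistic under H0: no association with time.
    null = np.array([stat(rng.permutation(resid)) for _ in range(n_resamples)])
    p_value = float(np.mean(null >= obs))
    return obs, p_value

rng = np.random.default_rng(5)
white = rng.standard_normal(200)               # adequate model: no structure
drift = white + 0.01 * np.arange(200)          # leftover trend in the residuals
_, p_white = resampled_residual_check(white)
_, p_drift = resampled_residual_check(drift)
```

Residuals with leftover structure are flagged (small p-value), while white-noise residuals are not systematically rejected.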
Microarray gene expression data are analyzed by means of a Bayesian nonparametric model, with emphasis on prediction of future observables, yielding a method for selection of differentially expressed genes and the corresponding classifier.
Based on runoff and meteorological data from Langan (兰干) Hydrological Station on the Keriya (克里雅) River from 1957 to 2009, the periodicities, abrupt changes, and trends of climate factors and runoff were investigated by wavelet analysis and nonparametric tests; the future change of the annual runoff was then predicted by a periodic trend superposition model. Subsequently, the contribution of climate change to the annual runoff was separated from the observed annual runoff of the Keriya River. The results show that (1) the temperature series increased significantly, while the annual runoff and precipitation of the Keriya River increased insignificantly at the significance level α = 0.05; (2) common periods of 9 and 15 years existed in the annual runoff evolution process, and the primary periods of temperature and precipitation were 9 and 22 years and 9 and 13 years, respectively; (3) the annual runoff did not vary simultaneously with the abrupt changes of climate factors in the headstream; the abrupt change points of annual runoff and temperature occurred in 1998 and 1980, respectively, while that of precipitation is not significant; and (4) the annual runoff will decrease in the future; the total increase attributable to climate change in the headstream during 1999-2009 is 23.154×10^8 m^3; however, the stream flow has been almost completely consumed by human activities in the mainstream area of the Keriya River.
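A standard nonparametric trend test used in such hydro-climatic studies is the Mann-Kendall test; whether this is the exact test applied above is not stated, so the sketch below (no-tie variance formula, synthetic rising series) is illustrative only.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    # Mann-Kendall nonparametric trend test (no-tie variance formula);
    # returns the S statistic, the normal-approximation Z, and a two-sided p.
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0  # continuity correction
    p = 2 * norm.sf(abs(z))
    return s, z, p

# Hypothetical 53-year series (1957-2009) with a built-in upward trend.
rng = np.random.default_rng(2)
series = 10.0 + 0.05 * np.arange(53) + rng.normal(0.0, 0.5, 53)
s, z, p = mann_kendall(series)
```

A positive Z with p below 0.05 corresponds to the "increased significantly" finding for the temperature series; an insignificant p corresponds to the runoff and precipitation results.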
Profile monitoring is used to check the stability of the quality of a product over time when the product quality is best represented by a function at each time point. However, most previous monitoring approaches have not considered that the argument values may vary from profile to profile, which is common in practice. A novel nonparametric control scheme based on profile error is proposed for monitoring nonlinear profiles with varying argument values. The proposed scheme uses metrics of profile error as the statistics to construct the control charts. Details of the design of this nonparametric scheme are also discussed. The monitoring performance of the combined control scheme is compared with that of alternative nonparametric methods via simulation. Simulation studies show that the combined scheme is effective in detecting parameter errors and is sensitive to small shifts in the process. In addition, owing to the properties of the charting statistics, the out-of-control signal can provide diagnostic information for the users. Finally, the implementation steps of the proposed monitoring scheme are given and applied to monitoring the blade manufacturing process. In this application to blade manufacturing for aircraft engines, the proposed nonparametric control scheme proves effective, interpretable, and easy to apply.
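The general shape of a profile-error control chart can be sketched as follows. The mean-squared profile error statistic, the empirical-quantile control limit, and the toy blade-like profiles are our simplifications, not the paper's exact metrics.

```python
import numpy as np

def profile_error_chart(in_control_profiles, new_profile, reference, alpha=0.01):
    # Charting statistic: mean squared profile error against a reference curve.
    # Control limit: empirical (1 - alpha) quantile of Phase I statistics.
    def mspe(y):
        return np.mean((y - reference) ** 2)

    phase1 = np.array([mspe(p) for p in in_control_profiles])
    ucl = np.quantile(phase1, 1 - alpha)
    stat = mspe(new_profile)
    return stat, ucl, stat > ucl

rng = np.random.default_rng(9)
t = np.linspace(0.0, 1.0, 50)
ref = np.sin(np.pi * t)  # nominal blade-like profile (toy)
# Phase I: 200 in-control profiles with measurement noise only.
phase1 = [ref + 0.05 * rng.standard_normal(50) for _ in range(200)]
# A shifted profile simulating an out-of-control process.
shifted = ref + 0.3 + 0.05 * rng.standard_normal(50)
stat, ucl, out_of_control = profile_error_chart(phase1, shifted, ref)
```

A new profile whose error statistic exceeds the upper control limit signals out-of-control; decomposing which part of the profile drives the error is what gives the diagnostic information mentioned above.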
Funding (PPFormer study): supported in part by the Gansu Haizhi Characteristic Demonstration Project (No. GSHZTS2022-2).
Funding (Wuyi Mountains study): supported by the Projects for National Natural Science Foundation of China (U22A20554), the Natural Science Foundation of Fujian Province (2023J01285), the Public Welfare Scientific Institutions of Fujian Province (2022R1002005), and the Scientific Project from Fujian Provincial Department of Science and Technology (2022Y0007).
Funding (geared rotor system study): supported by the National Natural Science Foundation of China (Grant Nos. 12072106, 52005156), the National Key Research and Development Program of China (Grant No. 2020YFB2008101), and the Foundation of Henan Key Laboratory of Superhard Abrasives and Grinding Equipment, Henan University of Technology of China (Grant No. JDKFJJ2022002).
Funding: Supported by the National Natural Science Foundation of China (12101004), the Natural Science Research Project of Anhui Educational Committee (2023AH030021), and the Research Startup Foundation for Introducing Talent of Anhui Polytechnic University (2020YQQ064).
Abstract: The present paper deals with the problem of nonparametric kernel density estimation of the trend function for stochastic processes driven by fractional Brownian motion of the second kind. The consistency, the rate of convergence, and the asymptotic normality of the kernel-type estimator are discussed. Moreover, we prove that the rate of convergence of the kernel-type estimator depends on the smoothness of the trend of the nonperturbed system.
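The kernel-type trend estimator discussed above is, in its simplest form, a Nadaraya-Watson smoother: a kernel-weighted local average of the observations. A minimal sketch with a Gaussian kernel (the smooth test trend and bandwidth below are illustrative assumptions, not from the paper):

```python
import numpy as np

def nw_kernel_estimate(t_obs, x_obs, t_grid, h):
    """Nadaraya-Watson kernel estimator of a trend function (Gaussian kernel)."""
    d = (t_grid[:, None] - t_obs[None, :]) / h
    w = np.exp(-0.5 * d ** 2)                 # kernel weights per grid point
    return (w @ x_obs) / w.sum(axis=1)

t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * t)                     # smooth trend, noise-free here
interior = slice(50, 450)                     # avoid boundary bias
est = nw_kernel_estimate(t, x, t[interior], h=0.02)
print(bool(np.max(np.abs(est - x[interior])) < 0.05))  # small interior error
```

The bandwidth h governs the bias-variance trade-off, which is where the smoothness of the trend enters the rate-of-convergence results.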
Abstract: The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction remains a major challenge. To address it, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson-Gower scaling technique to identify the relevant software metrics, measuring similarity with the dice coefficient; this feature selection step reduces the time complexity of software fault prediction. With the selected metrics, software faults are then predicted by Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient, and the softstep activation function produces the final fault prediction. To minimize the error, the Nelder-Mead method is applied to solve the non-linear least-squares problem, so that accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The results demonstrate the superior performance of the proposed SQADEN technique, with accuracy, sensitivity, and specificity higher by 3%, 3%, 2%, and 3%, and time and space lower by 13% and 15%, compared with the two state-of-the-art methods.
Abstract: Healthcare decisions are based on scientific evidence obtained from medical studies by gathering and analyzing data. Biostatistics is a powerful tool for such analysis, but many healthcare professionals lack knowledge in this field. This lack of knowledge can manifest itself in choosing the wrong statistical test for a given situation or applying a statistical test without checking its assumptions, leading to inaccurate results and misleading conclusions. With this narrative review, we aim to bring biostatistics closer to healthcare professionals by answering four questions: How to describe the distribution of data? How to assess the normality of data? How to transform data? And how to choose between nonparametric and parametric tests? Our hope is that the reader will be able to choose the right test for the right situation and obtain the most accurate results.
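The decision flow the review describes, check normality first, then pick a parametric or nonparametric test, can be sketched in a few lines with scipy. This is a simplified illustration of one common convention (Shapiro-Wilk for normality, t-test vs. Mann-Whitney U for two independent groups), not the review's full guidance:

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Check normality with Shapiro-Wilk, then pick the two-sample test:
    parametric t-test if both groups look normal, Mann-Whitney U otherwise."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

rng = np.random.default_rng(0)
# Strongly right-skewed data (e.g. length of stay): normality should be rejected
a = rng.exponential(scale=2.0, size=40)
b = rng.exponential(scale=3.0, size=40)
test_used, p = compare_groups(a, b)
print(test_used)
```

Real analyses would also check the other assumptions of each test (equal variances, independence) rather than normality alone.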
Abstract: This article develops a procedure for screening variables in ultra-high-dimensional settings based on their predictive significance. This is achieved by ranking the variables according to the variance of their respective marginal regression functions (RV-SIS). We show that, under some mild technical conditions, RV-SIS possesses a sure screening property, as defined by Fan and Lv (2008). Numerical comparisons suggest that RV-SIS is competitive with other screening procedures and outperforms them in many different model settings.
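The ranking idea can be sketched directly: fit a crude nonparametric marginal regression of y on each predictor (here a binned slice mean, a stand-in for the paper's estimator) and rank predictors by the variance of the fitted values. A hypothetical illustration, not the paper's exact procedure:

```python
import numpy as np

def rv_sis_ranks(X, y, n_bins=10):
    """Rank predictors by the variance of a binned (nonparametric) marginal
    regression fit of y on each column of X - a sketch of RV-SIS screening."""
    n, p = X.shape
    scores = np.empty(p)
    for j in range(p):
        order = np.argsort(X[:, j])
        bins = np.array_split(y[order], n_bins)   # slice-based marginal fit
        fitted = np.array([b.mean() for b in bins])
        scores[j] = fitted.var()                  # flat fit => tiny variance
    return np.argsort(-scores)                    # most important first

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = 3.0 * X[:, 4] + 0.1 * rng.normal(size=500)    # only variable 4 matters
print(rv_sis_ranks(X, y)[0])  # → 4
```

An inactive predictor has a nearly flat marginal regression function, so its fitted values barely vary; active predictors float to the top of the ranking, which is what makes sure screening plausible.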
基金supported by the National Science Foundation(NSF)under Award Number IIA-1355406.
Abstract: This paper proposes a set of nonparametric statistical tools for analyzing the system resilience of civil structures and infrastructure and its migration upon changes in critical system parameters. The work is founded on the classic theoretical framework in which system resilience is defined in multiple dimensions for a constructed system. Consequently, system resilience can lose its parametric form as a random variable, falling into the realm of nonparametric statistics. With this nonparametric shift, traditional distribution-based statistics are ineffective in characterizing the migration of system resilience due to the variation of system parameters. Three statistical tools are proposed under the nonparametric statistical resilience analysis (npSRA) framework: nonparametric copula-based sensitivity analysis, two-sample resilience test analysis, and a novel tool for resilience attenuation analysis. To demonstrate the framework, we focus on electric distribution systems, which are common in urban, suburban, and rural areas and vulnerable to tropical storms. A novel procedure for considering resourcefulness parameters in the socioeconomic space is proposed. Numerical results reveal the complex statistical relations between the distributions of system resilience, physical aging, and socioeconomic parameters for the power distribution system. The proposed resilience distance computation and resilience attenuation analysis further suggest two suitable nonparametric distance metrics, the Earth Mover's Distance (EMD) metric and the Cramér-von Mises (CVM) metric, for characterizing the migration of system resilience for electric distribution systems.
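Both distance metrics named above are available in scipy and can be applied directly to two samples of a resilience variable. A small sketch with synthetic samples (the "resilience" values here are invented for illustration):

```python
import numpy as np
from scipy.stats import wasserstein_distance, cramervonmises_2samp

rng = np.random.default_rng(2)
base = rng.normal(0.8, 0.05, 1000)     # hypothetical baseline resilience sample
shifted = base - 0.10                  # degraded system: distribution shifted down

emd_same = wasserstein_distance(base, base)      # 0 for identical distributions
emd_shift = wasserstein_distance(base, shifted)  # equals the shift for a pure translation
cvm = cramervonmises_2samp(base, shifted)        # two-sample CvM test

print(emd_same)
print(abs(emd_shift - 0.10) < 1e-6)
print(cvm.pvalue < 0.05)               # migration of the distribution is detected
```

The EMD (1-Wasserstein distance) gives an interpretable magnitude of migration in the units of the resilience variable, while the two-sample CvM test supplies a significance statement; using them together mirrors the distance-plus-test pairing in the framework.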
Abstract: The objectives of this paper are to demonstrate the algorithms employed by three statistical software programs (R, Real Statistics using Excel, and SPSS) for calculating the exact two-tailed probability of the Wald-Wolfowitz one-sample runs test for randomness, to present a novel approach for computing this probability, and to compare the four procedures by generating samples of 10 and 11 data points, varying the parameters n0 (number of zeros) and n1 (number of ones) as well as the number of runs. Fifty-nine samples are created to replicate the behavior of the distribution of the number of runs with 10 and 11 data points. The exact two-tailed probabilities from the four procedures were compared using Friedman's test. Given the significant difference in central tendency, post-hoc comparisons were conducted using Conover's test with Benjamini-Yekutieli correction. It is concluded that the procedures of Real Statistics using Excel and R exhibit some inadequacies in the calculation of the exact two-tailed probability, whereas the new proposal and the SPSS procedure are more suitable. The proposed robust algorithm has a more transparent rationale than the SPSS one, albeit being somewhat more conservative. We recommend its implementation for this test and its application to others, such as the binomial and sign tests.
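The exact distribution of the number of runs R for n0 zeros and n1 ones has a standard closed form, from which an exact two-tailed probability can be accumulated. The sketch below uses one common two-tailed convention (summing P(R = r) over all r at least as improbable as the observed count); the paper's proposed algorithm and the software implementations it compares may differ in exactly this convention:

```python
from math import comb

def runs_pmf(n0, n1, r):
    """Exact P(R = r) for the number of runs in a random arrangement
    of n0 zeros and n1 ones (Wald-Wolfowitz)."""
    total = comb(n0 + n1, n0)
    if r % 2 == 0:
        k = r // 2
        return 2 * comb(n0 - 1, k - 1) * comb(n1 - 1, k - 1) / total
    k = (r - 1) // 2
    return (comb(n0 - 1, k) * comb(n1 - 1, k - 1)
            + comb(n0 - 1, k - 1) * comb(n1 - 1, k)) / total

def exact_two_tailed_p(n0, n1, r_obs):
    """Two-tailed p: sum of P(R = r) over r no more probable than r_obs."""
    p_obs = runs_pmf(n0, n1, r_obs)
    return sum(p for r in range(2, n0 + n1 + 1)
               if (p := runs_pmf(n0, n1, r)) <= p_obs + 1e-12)

# Sanity check: the PMF sums to 1 over all attainable run counts
print(abs(sum(runs_pmf(5, 5, r) for r in range(2, 11)) - 1.0) < 1e-12)
print(exact_two_tailed_p(5, 5, 2))  # very few runs: strong evidence of clustering
```

Because `math.comb` returns 0 when the lower index exceeds the upper, unattainable run counts contribute zero probability automatically.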
Abstract: This study examines the nexus between the good and bad volatilities of three technological revolutions (financial technology (FinTech), the Internet of Things, and artificial intelligence technology) and the two main conventional and Islamic cryptocurrency platforms, Bitcoin and Stellar, via three approaches: quantile cross-spectral coherence, quantile-VAR connectedness, and quantile-based non-linear causality-in-mean and causality-in-variance analysis. The results are as follows: (1) under normal market conditions, over long-run horizons there is a significant positive cross-spectral relationship between FinTech's positive volatilities and Stellar's negative volatilities; (2) Stellar's negative and positive volatilities exhibit the highest net spillovers at the lower and upper tails, respectively; and (3) the quantile-based causality results indicate that Bitcoin's good (bad) volatilities can lead to bad (good) volatilities in all three smart technologies between normal and bull market conditions. Moreover, the Bitcoin industry's negative volatilities have a bilateral cause-and-effect relationship with FinTech's positive volatilities. By analyzing the second moment, we find that Bitcoin's negative volatilities are the only cause variable that generates FinTech's good volatility in a unidirectional manner. As for Stellar, only bad volatilities have the potential to signal good volatilities for cutting-edge technologies at some middle quantiles, whereas good volatilities have no significant effect. Hence, the trade-off between Bitcoin and cutting-edge technologies, especially FinTech-related advancements, appears broader and more random than the Stellar-innovative technologies nexus. The findings provide valuable insights for FinTech companies, blockchain developers, crypto-asset regulators, portfolio managers, and high-tech investors.
Funding: Supported by the National Natural Science Foundation of China (No. 12271261) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province, China (Grant No. SJCX230368).
Abstract: Normality testing is a fundamental hypothesis test in the statistical analysis of key biological indicators of diabetes. If the normality assumption is violated, test results may deviate from the true values, leading to incorrect inferences and conclusions and ultimately affecting the validity and accuracy of statistical inference. Considering this, the study designs a unified analysis scheme for different data types based on parametric and non-parametric statistical test methods. The data were grouped by sample type and divided into discrete and continuous data. To account for differences among subgroups, the conventional chi-squared test was used for discrete data. The normal distribution is the basis of many statistical methods; if the data do not follow a normal distribution, many such methods fail or produce incorrect results. Therefore, before data analysis and modeling, the continuous data were divided into normal and non-normal groups through normality testing. For normally distributed data, parametric statistical methods were used to judge the differences between groups; for non-normal data, non-parametric tests were employed to improve the accuracy of the analysis. Statistically significant indicators were retained according to the P-value of the statistical test or the corresponding statistics, and these indicators were then combined with relevant medical background to further explore the etiology leading to the occurrence or transformation of diabetes status.
Abstract: In multilevel thresholding segmentation of an image, the number of classes is usually specified by the supervisor. To solve this problem, a fast multilevel thresholding algorithm that considers both the threshold values and the number of classes is proposed based on maximum entropy, and a self-adaptive criterion for the number of classes is given. The algorithm can obtain the thresholds and automatically decide the number of classes. Experimental results show that the algorithm is effective.
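The single-threshold core of maximum-entropy thresholding (Kapur's criterion) picks the threshold that maximizes the summed entropies of the two classes it induces; the multilevel algorithm above extends this to several thresholds plus a class-count criterion. A minimal single-threshold sketch on a synthetic histogram, not the paper's full algorithm:

```python
import numpy as np

def max_entropy_threshold(hist):
    """Kapur-style maximum-entropy threshold for a 1-D intensity histogram:
    choose t maximizing the sum of entropies of the two induced classes."""
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, len(p)):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue                       # skip empty classes
        q0, q1 = p[:t] / p0, p[t:] / p1    # within-class distributions
        h = (-sum(q * np.log(q) for q in q0 if q > 0)
             - sum(q * np.log(q) for q in q1 if q > 0))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Bimodal histogram: mass concentrated around bins 2-3 and bins 12-13
hist = np.zeros(16)
hist[[2, 3]] = 100
hist[[12, 13]] = 100
t = max_entropy_threshold(hist)
print(3 < t <= 12)  # the chosen threshold separates the two modes
```

For K thresholds the same entropy sum is maximized over K cut points, which is where the speed of the search and the self-adaptive choice of K become the interesting problems.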
Funding: Supported by the National Natural Science Foundation of China (Project 50975263), the International Cooperation Project of Shanxi Province, China (Project 2010081015), the Scholarship Council of Shanxi Province, China (Project 2010-78), and the Doctoral Fund of the Ministry of Education of China (Project 2010420120005).
Abstract: A new algorithm based on the projection method with an implicit finite difference technique was established to calculate the velocity fields and pressure. The calculation region can be divided into different regions according to the Reynolds number. In the far-wall region, the thermal melt flow was treated as Newtonian flow; in the near-wall region, it was treated as non-Newtonian flow. The correctness of the new algorithm was verified through nonparametric statistical methods and experiment. The simulation results show that the new algorithm, based on the projection method with the implicit technique, computes more quickly than the solution algorithm-volume of fluid method using the explicit difference method.
Funding: Supported by the Cultivation Fund of the Key Scientific and Technical Innovation Project of the Ministry of Education of China (No. 3104001014).
Abstract: A nonparametric Bayesian method is presented to classify MPSK (M-ary phase shift keying) signals. MPSK signals with unknown signal-to-noise ratios (SNRs) are modeled as a Gaussian mixture with unknown means and covariances in the constellation plane, and a clustering method is proposed to estimate the probability density of the MPSK signals. The method is based on nonparametric Bayesian inference, which introduces the Dirichlet process as the prior on the mixture coefficients and a normal-inverse-Wishart (NIW) distribution as the prior on the unknown means and covariances. Then, according to the received signals, the parameters are adjusted by a Markov chain Monte Carlo (MCMC) sampling algorithm. Through iteration, the density of the MPSK signals can be estimated. Simulation results show that the correct recognition ratio for 2/4/8PSK is greater than 95% when SNR > 5 dB and 1600 symbols are used.
Abstract: In time series modeling, the residuals are often checked for white noise and normality. In practice, the commonly used tests are the Ljung-Box test, the McLeod-Li test, and the Lin-Mudholkar test. In this paper, we present a nonparametric approach for checking the residuals of time series models. The approach is based on the maximal correlation coefficient ρ*² between the residuals and time t. The basic idea is to use the bootstrap to form the null distribution of the statistic ρ*² under the null hypothesis H₀: ρ*² = 0. For calculating ρ*², we propose a ρ algorithm analogous to the ACE procedure. A power study shows that this approach is more powerful than the Ljung-Box test. Some numerical results and two examples are also reported.
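The bootstrap-null idea above can be sketched with a simpler statistic: under H₀ the residuals carry no time structure, so resampling them destroys nothing and yields the null distribution. The sketch below substitutes |Pearson r| for the maximal correlation coefficient (the paper's ρ algorithm is not reproduced here), so it is an illustration of the resampling logic only:

```python
import numpy as np

def bootstrap_pvalue(residuals, n_boot=2000, seed=0):
    """Resampling null distribution for the correlation between residuals
    and time. |Pearson r| stands in for the maximal correlation ρ*²."""
    rng = np.random.default_rng(seed)
    t = np.arange(len(residuals))
    obs = abs(np.corrcoef(t, residuals)[0, 1])
    null = np.empty(n_boot)
    for b in range(n_boot):
        shuffled = rng.permutation(residuals)   # breaks any residual-time link
        null[b] = abs(np.corrcoef(t, shuffled)[0, 1])
    return float((null >= obs).mean())

rng = np.random.default_rng(3)
trended = rng.normal(size=200) + 0.02 * np.arange(200)  # leftover trend in residuals
print(bootstrap_pvalue(trended) < 0.05)  # model inadequacy is detected
```

Replacing |Pearson r| with a maximal-correlation estimate (as the ACE-style ρ algorithm does) lets the same machinery detect non-linear and non-monotone residual-time dependence.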
Abstract: Microarray gene expression data are analyzed by means of a Bayesian nonparametric model, with emphasis on prediction of future observables, yielding a method for the selection of differentially expressed genes and the corresponding classifier.
Funding: Supported by the National Basic Research Program of China (No. 2009CB421308) and the Ministry of Water Resources Special Fund for Scientific Research on Public Causes (No. 201101049).
Abstract: Based on the runoff and meteorological data of Langan (兰干) Hydrological Station on the Keriya (克里雅) River from 1957 to 2009, the periodicities, abrupt changes, and trends of climate factors and runoff were investigated by wavelet analysis and nonparametric tests; the future change of the annual runoff was then predicted by a periodic trend superposition model. Subsequently, the influence of climate change on the annual runoff was separated from the observed values of the annual runoff in the Keriya River. The results show that (1) the temperature series increased significantly, while the annual runoff and precipitation of the Keriya River increased insignificantly at the significance level α=0.05; (2) common periods of 9 and 15 years existed in the annual runoff evolution process, and the primary periods of temperature and precipitation were 9 and 22 years and 9 and 13 years, respectively; (3) the annual runoff did not vary simultaneously with the abrupt changes of climate factors in the headstream: the abrupt points of annual runoff and temperature occurred in 1998 and 1980, respectively, while that of precipitation is not significant; and (4) the annual runoff will experience a decreasing trend in the future; the total increase attributable to climate change is 23.154×10⁸ m³ in the headstream during 1999-2009, but the stream flow has been nearly completely consumed by human activities in the mainstream area of the Keriya River.
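A standard nonparametric trend test for hydrological series of this kind is the Mann-Kendall test (the abstract does not name its specific test, so this is an assumed stand-in). A minimal no-ties implementation on a synthetic rising runoff series:

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic, normal-approximation Z,
    and two-sided p-value (no-ties variance formula)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])               # +1 / -1 per concordant pair
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return s, z, p

rng = np.random.default_rng(4)
runoff = 10 + 0.3 * np.arange(50) + rng.normal(0, 1, 50)   # rising series
s, z, p = mann_kendall(runoff)
print(s > 0 and p < 0.05)  # significant increasing trend
```

In practice such a test would be paired with an abrupt-change detector (e.g., a Pettitt-type test) and the wavelet periodicity analysis described above.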
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 70931004, 70802043).
Abstract: Profile monitoring is used to check the stability of product quality over time when the quality is best represented by a function at each time point. However, most previous monitoring approaches have not considered that the argument values may vary from profile to profile, which is common in practice. A novel nonparametric control scheme based on profile error is proposed for monitoring nonlinear profiles with varied argument values. The proposed scheme uses metrics of profile error as the statistics to construct the control charts, and the design of this nonparametric scheme is discussed in detail. The monitoring performance of the combined control scheme is compared with that of alternative nonparametric methods via simulation. Simulation studies show that the combined scheme is effective in detecting parameter errors and is sensitive to small shifts in the process. In addition, owing to the properties of the charting statistics, the out-of-control signal can provide diagnostic information for the users. Finally, the implementation steps of the proposed monitoring scheme are given and applied to monitoring the blade manufacturing process of aircraft engines; the proposed nonparametric control scheme is effective, interpretable, and easy to apply.
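The key point of profile-error charting, that each profile is scored against a reference curve at its own argument values, can be sketched in a few lines. The error metric (mean squared deviation) and the empirical-quantile control limit below are simplifying assumptions, not the paper's exact charting statistics:

```python
import numpy as np

def profile_error(t, y, reference):
    """Mean squared deviation of an observed profile from a reference curve,
    evaluated at the profile's own (possibly varying) argument values."""
    return float(np.mean((y - reference(t)) ** 2))

def control_limit(errors, q=0.99):
    """Empirical upper control limit from in-control Phase I profile errors."""
    return float(np.quantile(errors, q))

def reference(t):
    return np.sin(t)                           # assumed in-control profile shape

rng = np.random.default_rng(5)
phase1 = []
for _ in range(200):                           # Phase I: in-control profiles
    ts = np.sort(rng.uniform(0, np.pi, 30))    # profile-specific argument values
    ys = reference(ts) + rng.normal(0, 0.05, 30)
    phase1.append(profile_error(ts, ys, reference))
ucl = control_limit(np.array(phase1))

# A shifted (out-of-control) profile should exceed the control limit
ts = np.sort(rng.uniform(0, np.pi, 30))
shifted = reference(ts) + 0.3 + rng.normal(0, 0.05, 30)
print(profile_error(ts, shifted, reference) > ucl)  # signal
```

Because the error is computed at each profile's own argument values, no common design grid is required, which is exactly the varied-argument-values difficulty the scheme addresses.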