Journal articles: 48 results found (entries 1-20 shown below).
1. The Horvitz-Thompson Weighting Method for Quantile Regression Estimation in the Presence of Missing Covariates
Authors: Zhaoji CHU, Lingnan TAI, Wei XIONG, Xu GUO, Maozai TIAN. Journal of Mathematical Research with Applications (CSCD), 2021, No. 3, pp. 303-322.
Missing covariate data is a central topic in modern statistical analysis. It arises frequently in surveys and interviews, and becomes more challenging in the presence of heavy-tailed, skewed, and heteroscedastic data. In this setting, a robust quantile regression method is of particular interest. This paper presents an inverse-probability-weighted quantile regression method to explore the relationship between the response and the covariates. The method has several advantages over the naive estimator: on the one hand, it uses all available data and the missing covariates are allowed to be strongly correlated with the response; on the other hand, the estimator is uniformly asymptotically normal across quantile levels. The effectiveness of the method is verified by simulation. Finally, to further illustrate the method, we extend it to more general settings, including the multivariate and nonparametric cases.
Keywords: robust quantile regression; missing covariates; selection probability; kernel estimator; weighting method.
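As a rough illustration of the inverse-probability-weighting idea summarized above (not the authors' estimator), the sketch below fits a weighted quantile regression by minimizing a weighted check loss, with the selection probabilities estimated by a logistic model; the data-generating setup and all variable names are hypothetical.

```python
# Hedged sketch: inverse-probability-weighted quantile regression on simulated data.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                              # covariate that may be missing
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)    # heavy-tailed errors

# Missingness of x depends on the always-observed response y.
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * y)))
observed = rng.uniform(size=n) < p_obs

# Step 1: estimate selection probabilities pi_i from the observed response.
sel = LogisticRegression().fit(y.reshape(-1, 1), observed.astype(int))
pi_hat = sel.predict_proba(y.reshape(-1, 1))[:, 1]

# Step 2: minimize the inverse-probability-weighted check (pinball) loss
# over the complete cases only.
tau = 0.5
yc, xc, wc = y[observed], x[observed], 1.0 / pi_hat[observed]

def weighted_check_loss(beta):
    resid = yc - beta[0] - beta[1] * xc
    return np.sum(wc * resid * (tau - (resid < 0)))

beta_hat = minimize(weighted_check_loss, x0=np.zeros(2), method="Nelder-Mead").x
print("IPW quantile regression estimate (intercept, slope):", beta_hat)
```

The paper estimates the selection probabilities nonparametrically with a kernel estimator; the logistic fit above is only a convenient stand-in.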
2. ETF construction on CRIX
Authors: Konstantin Häusler, Wolfgang Härdle. Financial Innovation, 2025, No. 1, pp. 2661-2681.
We construct an exchange-traded fund (ETF) based on the CRyptocurrency IndeX (CRIX), which closely tracks nonstationary cryptocurrency (CC) dynamics by adapting the weights of its constituents dynamically. Our scenario analysis considers the fee schedules of regulated CC exchanges and spreads obtained from order-book data, while investment in- and outflows to the ETF are modelled stochastically. The scenario analysis yields valuable insights into the mechanisms, costs, and risks of this innovative financial product: i) although the composition of the CRIX ETF changes frequently (from 5 to 30 constituents), it remains robust in its core, as the weights of Bitcoin (BTC) and Ethereum (ETH) are stable over time; ii) on average, 5.2% of the portfolio needs to be rebalanced on the rebalancing dates; iii) trading costs are low compared with traditional assets; iv) the liquidity of the CC sector increases significantly during the analysis period; spreads arise especially for altcoins and increase with transaction size. However, because BTC and ETH are the assets most affected by rebalancing, the cost of spreads remains limited.
Keywords: cryptocurrency; ETF; CRIX; market dynamics.
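The 5.2% average rebalancing figure above is a portfolio turnover quantity. As a generic illustration (not the authors' exact accounting), the snippet below computes one common turnover measure, half the sum of absolute weight changes between the drifted and the target portfolio, on made-up weights.

```python
# Hedged sketch: turnover between drifted weights and new target index weights.
import numpy as np

# Hypothetical weights for a 5-constituent index before and after rebalancing.
drifted_weights = np.array([0.46, 0.30, 0.10, 0.08, 0.06])   # after price drift
target_weights  = np.array([0.42, 0.32, 0.11, 0.09, 0.06])   # new index weights

turnover = 0.5 * np.abs(target_weights - drifted_weights).sum()
print(f"fraction of the portfolio to rebalance: {turnover:.1%}")
```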
3. Recommending Friends Instantly in Location-based Mobile Social Networks (cited 4 times)
Authors: QIAO Xiuquan, SU Jianchong, ZHANG Jinsong, XU Wangli, WU Budan, XUE Sida, CHEN Junliang. China Communications (SCIE, CSCD), 2014, No. 2, pp. 109-127.
Unlike general online social networks (OSNs), a location-based mobile social network (LMSN), which seamlessly integrates mobile computing and social computing technologies, has unique characteristics of temporal, spatial, and social correlation. Recommending friends instantly based on the current location of users in the real world has become increasingly popular in LMSNs. However, existing friend recommendation methods based on the topological structure of a social network, or on non-topological information such as similar user profiles, cannot adequately address instant friend-making in the real world. In this article, we analyze users' check-in behavior on a real LMSN site named Gowalla. Based on this analysis, we present an approach for recommending friends instantly to LMSN users by simultaneously considering real-time physical location proximity, offline behavior similarity, and friendship network information in the virtual community. This approach effectively bridges the gap between the offline behavior of users in the real world and online friendship network information in the virtual community. Finally, we use the real user check-in dataset of Gowalla to verify the effectiveness of our approach.
Keywords: mobile social network service; friend recommendation; location-based service; location proximity; user behavior similarity; singular value decomposition.
4. Regression Analysis of Dependent Current Status Data with Left-Truncation Under Linear Transformation Model
Authors: ZHANG Mengyue, ZHAO Shishun, XU Da, HU Tao, SUN Jianguo. Journal of Systems Science & Complexity, 2025, No. 5, pp. 2066-2083.
The paper discusses the regression analysis of current status data, which are common in various fields such as tumorigenicity studies and demographic research. Analyzing this type of data poses a significant challenge and has recently gained considerable interest. The authors consider an even more difficult scenario where, apart from censoring, one also faces left-truncation and informative censoring, meaning that there is a potential correlation between the examination time and the failure time of interest. The authors propose a sieve maximum likelihood estimation (MLE) procedure in which a copula-based model is used to describe the informative censoring. Additionally, splines are used to estimate the unknown nonparametric functions in the model, and the asymptotic properties of the proposed estimator are established. Simulation results indicate that the developed approach is effective in practice, and it is applied to a set of real data.
Keywords: copula; current status data; informative observation; left-truncation; linear transformation model; splines.
5. Variable Selection for Interval-Censored Failure Time Data Under the Partly Linear Additive Generalized Odds Rate Model
Authors: Yang Xu, Shishun Zhao, Tao Hu, Jianguo Sun. Acta Mathematica Sinica, English Series, 2025, No. 10, pp. 2524-2554.
This paper discusses variable selection for interval-censored failure time data, a general type of failure time data that arises commonly in many areas such as clinical trials and follow-up studies. Although some methods have been developed in the literature for this problem, most of the existing procedures apply only to specific models. In this paper, we consider data arising from a general class of partly linear additive generalized odds rate models and propose a penalized variable selection approach based on maximizing a derived penalized likelihood function. In the method, Bernstein polynomials are employed to approximate both the unknown baseline hazard function and the nonlinear covariate effect functions, and a coordinate descent algorithm is developed for the implementation. The asymptotic properties of the proposed estimators, including the oracle property, are also established. An extensive simulation study is conducted to assess the finite-sample performance of the proposed estimators and indicates that the method works well in practice. Finally, the proposed method is applied to a set of real data on Alzheimer's disease.
Keywords: Bernstein polynomials; generalized odds rate model; interval-censored data; oracle property; partly linear additive model; variable selection.
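The Bernstein-polynomial approximation mentioned in the abstract is easy to illustrate on its own. The sketch below (a generic illustration, not the authors' estimation code) builds the degree-m Bernstein basis on [0, 1] and uses a least-squares fit to approximate a smooth nonlinear effect; the target function is invented.

```python
# Hedged sketch: approximating a smooth function with a Bernstein polynomial basis.
import numpy as np
from scipy.special import comb

def bernstein_basis(t, degree):
    """Matrix whose columns are B_{k,degree}(t) = C(degree, k) t^k (1 - t)^(degree - k)."""
    t = np.asarray(t)[:, None]
    k = np.arange(degree + 1)[None, :]
    return comb(degree, k) * t**k * (1.0 - t)**(degree - k)

# Hypothetical smooth covariate effect to be approximated on [0, 1].
t = np.linspace(0.0, 1.0, 200)
f_true = np.sin(2 * np.pi * t) + 0.5 * t**2

B = bernstein_basis(t, degree=8)
coef, *_ = np.linalg.lstsq(B, f_true, rcond=None)   # least-squares coefficients
print("max approximation error:", np.max(np.abs(B @ coef - f_true)))
```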
6. Additive Hazards Regression for Misclassified Current Status Data
Authors: Wenshan Wang, Shishun Zhao, Shuwei Li, Jianguo Sun. Communications in Mathematics and Statistics, 2025, No. 2, pp. 507-526.
We discuss regression analysis of current status data under the additive hazards model when the failure status may be misclassified. Such data occur commonly in many scientific fields involving diagnostic tests with imperfect sensitivity and specificity. In particular, we consider the situation where the sensitivity and specificity are known and propose a nonparametric maximum likelihood approach. For the implementation of the method, a novel EM algorithm is developed, and the asymptotic properties of the resulting estimators are established. Furthermore, the estimated regression parameters are shown to be semiparametrically efficient. We demonstrate the empirical performance of the proposed methodology in a simulation study and show its substantial advantages over the naive method. An application to a motivating study on chlamydia is also provided.
Keywords: EM algorithm; maximum likelihood estimation; misclassification; regression analysis.
7. Optimal subsampling for principal component analysis
Authors: Xuehu Zhu, Weixuan Yuan, Zongben Xu, Wenlin Dai. Science China Mathematics, 2025, No. 12, pp. 2993-3016.
Principal component analysis (PCA) is ubiquitous in statistics and machine learning. It is frequently used as an intermediate procedure in various regression and classification problems to reduce the dimensionality of datasets. However, as the size of datasets becomes extremely large, direct application of PCA may not be feasible, since loading and storing massive datasets may exceed the computational ability of common machines. To address this problem, subsampling is usually performed, in which a small proportion of the data is used as a surrogate of the entire dataset. This paper proposes an A-optimal subsampling algorithm to decrease the computational cost of PCA for super-large datasets. More specifically, we establish the consistency and asymptotic normality of the eigenvectors of the subsampled covariance matrix. Subsequently, we derive the optimal subsampling probabilities for PCA based on the A-optimality criterion. We validate the theoretical results by conducting extensive simulation studies. Moreover, the proposed subsampling algorithm for PCA is embedded into a classification procedure for handwriting data to assess its effectiveness in real-world applications.
Keywords: big data; dimensionality reduction; optimal subsampling; principal component analysis.
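To illustrate the general subsampling-for-PCA workflow described above, the sketch below draws a weighted subsample and eigendecomposes the reweighted covariance matrix. The subsampling probabilities here are simply proportional to squared row norms, which is an assumption for illustration and not the authors' A-optimal rule.

```python
# Hedged sketch: PCA on a probability-weighted subsample of a large data matrix.
import numpy as np

rng = np.random.default_rng(2)
n, p, r = 100_000, 20, 2000                 # full size, dimension, subsample size
X = rng.normal(size=(n, p)) @ np.diag(np.linspace(3.0, 0.5, p))
Xc = X - X.mean(axis=0)

# Subsampling probabilities: proportional to squared row norms (a stand-in;
# the paper's A-optimal probabilities would replace this line).
probs = (Xc ** 2).sum(axis=1)
probs /= probs.sum()

idx = rng.choice(n, size=r, replace=True, p=probs)
w = 1.0 / (n * probs[idx])                  # inverse-probability weights

# Horvitz-Thompson-type weighted covariance from the subsample, then eigenvectors.
cov_sub = (Xc[idx].T * w) @ Xc[idx] / r
sub_vecs = np.linalg.eigh(cov_sub)[1][:, ::-1][:, :3]

full_vecs = np.linalg.eigh(Xc.T @ Xc / n)[1][:, ::-1][:, :3]
print("|cos| between subsampled and full-data PCs:",
      np.abs((sub_vecs * full_vecs).sum(axis=0)))
```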
8. Regression Analysis of Interval-Censored Data with Informative Observation Times Under the Accelerated Failure Time Model (cited 2 times)
Authors: ZHAO Shishun, DONG Lijian, SUN Jianguo. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2022, No. 4, pp. 1520-1534.
This paper discusses regression analysis of interval-censored failure time data arising from the accelerated failure time model in the presence of informative censoring. For the problem, a sieve maximum likelihood estimation approach is proposed in which a copula model is employed to describe the relationship between the failure time of interest and the censoring or observation process. I-spline functions are used to approximate the unknown functions in the model. A simulation study is carried out to assess the finite-sample performance of the proposed approach and suggests that it works well in practical situations. In addition, an illustrative example is provided.
Keywords: accelerated failure time model; copula models; informative censoring; interval-censored data; splines.
9. Acceleration of the EM Algorithm Using the Vector Aitken Method and Its Steffensen Form (cited 2 times)
Authors: Xu GUO, Qiu-yue LI, Wang-li XU. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2017, No. 1, pp. 175-182.
Based on the vector Aitken (VA) method, we propose an accelerated Expectation-Maximization (EM) algorithm, the VA-accelerated EM algorithm, whose convergence is faster than that of the EM algorithm. The VA-accelerated EM algorithm does not use the information matrix; it only uses the sequence of estimates obtained from EM iterations, and thus keeps the flexibility and simplicity of the EM algorithm. Considering the Steffensen iterative process, we also give the Steffensen form of the VA-accelerated EM algorithm, and the reformed process can be proved to be quadratically convergent. Numerical analyses illustrate that the proposed methods are efficient and faster than the EM algorithm.
Keywords: EM algorithm; VA-accelerated EM algorithm; convergence rate; Steffensen iteration.
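The vector Aitken idea can be shown on any slowly converging fixed-point iteration. The sketch below is an illustration under the classical componentwise Aitken delta-squared formula, not the authors' exact scheme; the linear fixed-point map is an invented stand-in for an EM update.

```python
# Hedged sketch: componentwise vector Aitken (delta-squared) acceleration of a
# fixed-point iteration x_{k+1} = F(x_k), used here as a stand-in for an EM update.
import numpy as np

A = np.array([[0.8, 0.1],
              [0.2, 0.7]])
b = np.array([1.0, 2.0])

def F(x):
    return A @ x + b            # slowly converging linear fixed-point map

def aitken_step(x0):
    """One Aitken extrapolation from three successive iterates (two F evaluations)."""
    x1, x2 = F(x0), F(F(x0))
    d1, d2 = x1 - x0, x2 - 2 * x1 + x0
    safe = np.abs(d2) > 1e-14
    out = x2.copy()
    out[safe] = x0[safe] - d1[safe] ** 2 / d2[safe]
    return out

x_plain = np.zeros(2)
x_fast = np.zeros(2)
for _ in range(10):
    x_plain = F(x_plain)
    x_fast = aitken_step(x_fast)

x_star = np.linalg.solve(np.eye(2) - A, b)        # true fixed point
print("plain iteration error:", np.linalg.norm(x_plain - x_star))
print("Aitken-accelerated error:", np.linalg.norm(x_fast - x_star))
```

Restarting from each extrapolated point, as in the loop above, roughly corresponds to a Steffensen-type iteration of the kind referred to in the title.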
10. Regression Analysis of Misclassified Current Status Data with Informative Observation Times (cited 2 times)
Authors: WANG Wenshan, XU Da, ZHAO Shishun, SUN Jianguo. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2023, No. 3, pp. 1250-1264.
Misclassified current status data arise when each study subject can be observed only once and the observation status is determined by a diagnostic test with imperfect sensitivity and specificity. In this situation, another issue that may occur is that the observation time is correlated with the failure time of interest, which is often referred to as informative censoring or informative observation times. It is well known that, in the presence of informative censoring, an analysis that ignores it can yield biased or even misleading results. In this paper, the authors consider such data and propose a frailty-based inference procedure. In particular, an EM algorithm based on Poisson latent variables is developed and the asymptotic properties of the resulting estimators are established. Numerical results show that the proposed method works well in practice, and an application to a set of real data is provided.
Keywords: current status data; EM algorithm; informative censoring; misclassification; proportional hazards model.
11. Empirical likelihood-based dimension reduction inference for linear error-in-responses models with validation study (cited 2 times)
Authors: Wang Qihua, Härdle Wolfgang. Science China Mathematics (SCIE), 2004, No. 6, pp. 921-939.
In this paper, linear errors-in-response models are considered in the presence of validation data on the responses. A semiparametric dimension-reduction technique is employed to define an estimator of β with asymptotic normality, together with estimated empirical log-likelihoods and adjusted empirical log-likelihoods for the vector of regression coefficients and for linear combinations of the regression coefficients, respectively. The estimated empirical log-likelihoods are shown to be asymptotically distributed as weighted sums of independent χ^2_1 variables, and the adjusted empirical log-likelihoods are shown to be asymptotically standard chi-squared.
Keywords: confidence intervals; error-in-response; validation data.
12. A combined p-value test for the mean difference of high-dimensional data
Authors: Wei Yu, Wangli Xu, Lixing Zhu. Science China Mathematics (SCIE, CSCD), 2019, No. 5, pp. 961-978.
This paper proposes a novel method for testing the equality of high-dimensional means using a multiple hypothesis test. The proposed test statistic is the maximum of standardized partial sums of logarithmic p-values. Numerical studies show that the method performs well for both normal and non-normal data and has good power under both dense and sparse alternative hypotheses. A real data analysis is implemented for illustration.
Keywords: high-dimensional data; equality of means; multiple hypothesis testing; sparse alternatives.
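As a rough numerical illustration of a "maximum of standardized partial sums of logarithmic p-values" statistic (a generic construction, not necessarily the standardization used in the paper), the sketch below sorts the p-values, forms partial sums of -log p, standardizes them by their simulated null mean and standard deviation, takes the maximum, and calibrates the result by Monte Carlo under the global null.

```python
# Hedged sketch: a max-of-standardized-partial-sums combination of p-values,
# calibrated by Monte Carlo under the global null (not the paper's exact test).
import numpy as np

rng = np.random.default_rng(3)
m, n_null = 500, 5000                      # number of p-values, null replications

def partial_sums(pvals):
    return np.cumsum(-np.log(np.sort(pvals)))   # partial sums of -log of sorted p-values

# Null reference: partial sums when all m p-values are Uniform(0, 1).
null_S = np.array([partial_sums(rng.uniform(size=m)) for _ in range(n_null)])
mu0, sd0 = null_S.mean(axis=0), null_S.std(axis=0)

def combined_stat(pvals):
    return np.max((partial_sums(pvals) - mu0) / sd0)

null_T = np.array([combined_stat(rng.uniform(size=m)) for _ in range(n_null)])

# Example: a sparse alternative where only 10 of the 500 p-values carry signal.
p_obs = rng.uniform(size=m)
p_obs[:10] = rng.uniform(0, 1e-4, size=10)
T = combined_stat(p_obs)
print("combined statistic:", T, " Monte Carlo p-value:", np.mean(null_T >= T))
```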
13. Symmetrical Independence Tests for Two Random Vectors with Arbitrary Dimensional Graphs
Authors: Jia Min LIU, Gao Rong LI, Jian Qiang ZHANG, Wang Li XU. Acta Mathematica Sinica, English Series (SCIE, CSCD), 2022, No. 4, pp. 662-682.
Testing independence between random vectors X and Y is an essential task in statistical inference. One class of testing methods is based on minimal spanning trees of X and Y. The main idea is to generate the minimal spanning tree for one random vector, say X, and, for each edge of the tree, to compute a rank number based on the other random vector Y; the resulting test statistics are constructed from these rank numbers. However, the existing statistics are not symmetric in X and Y, so the power obtained from the minimal spanning tree of X need not match that from the minimal spanning tree of Y, and the conclusions from the two trees may even conflict. To solve these problems, we propose several symmetrical independence tests for X and Y. The exact distributions of the test statistics are investigated for small sample sizes, and their asymptotic properties are also studied. A permutation method is introduced for obtaining critical values. Numerical analysis demonstrates that the proposed methods are more efficient than existing ones.
Keywords: exact distribution; minimal spanning tree; asymptotic distribution; symmetrical independence test.
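The MST-plus-ranks recipe sketched in the abstract can be mocked up as follows. This is only an illustrative permutation test under invented rank and symmetrization choices (summing Y-distance ranks over the edges of the X-tree and vice versa, then averaging); it is not one of the statistics proposed in the paper.

```python
# Hedged sketch: a symmetrized, permutation-calibrated MST-based independence test.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.stats import rankdata

rng = np.random.default_rng(4)
n = 100
X = rng.normal(size=(n, 3))
Y = X[:, :2] + 0.5 * rng.normal(size=(n, 2))     # dependent on X

def edge_rank_sum(A, B):
    """Sum of ranks of B-distances over the edges of the MST built on A."""
    DA, DB = squareform(pdist(A)), squareform(pdist(B))
    rank_B = squareform(rankdata(squareform(DB)))    # pairwise-distance ranks of B
    i, j = minimum_spanning_tree(DA).nonzero()
    return rank_B[i, j].sum()

def sym_stat(A, B):
    return 0.5 * (edge_rank_sum(A, B) + edge_rank_sum(B, A))

obs = sym_stat(X, Y)
perm = np.array([sym_stat(X, Y[rng.permutation(n)]) for _ in range(500)])
# Under dependence, points close in X tend to be close in Y, so the edge rank
# sum is small; hence a one-sided permutation p-value:
print("permutation p-value:", np.mean(perm <= obs))
```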
14. Momentum Effect Differs Across Stock Performances: Chinese Evidence
Authors: Zhao-yuan LI, Si-bo LIU, Mao-zai TIAN. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2014, No. 2, pp. 279-288.
Prior empirical studies find positive and negative momentum effects across global markets, but few focus on explaining the mixed results. To address this issue, we apply the quantile regression approach to analyze the momentum effect in the Chinese stock market. The evidence suggests that the momentum effect in Chinese stocks is not stable across firms with different levels of performance. We find that the negative momentum effect over short and medium horizons (3 months and 9 months) increases with the quantile of stock returns, while a positive momentum effect is observed over the long horizon (12 months) and also intensifies for high-performing stocks. According to our study, the momentum effect needs to be examined on the basis of stock returns; OLS estimation, which gives a single and potentially biased summary, can provide misleading intuition about momentum effects across markets. Based on the empirical results of the quantile regression, effective risk control strategies can also be designed by adjusting the proportion of assets according to past performance.
Keywords: Chinese stock market; investment strategy; momentum effect; quantile regression.
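To make the quantile-regression angle concrete, the sketch below regresses simulated forward returns on past formation-period returns at several quantiles with statsmodels. The data-generating process and horizons are invented for illustration and are not the paper's empirical design.

```python
# Hedged sketch: momentum via quantile regression of future returns on past returns.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 3000
past_ret = rng.normal(0.0, 0.1, size=n)             # formation-period return
noise = rng.standard_t(df=4, size=n) * 0.08
# Invented DGP: reversal for losers, momentum for winners.
future_ret = (-0.2 + 0.4 * (past_ret > 0)) * past_ret + noise

X = sm.add_constant(past_ret)
for q in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(future_ret, X).fit(q=q)
    print(f"quantile {q}: slope on past return = {fit.params[1]: .3f}")
```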
15. A penalized integrative deep neural network for variable selection among multiple omics datasets
Authors: Yang Li, Xiaonan Ren, Haochen Yu, Tao Sun, Shuangge Ma. Quantitative Biology (CAS, CSCD), 2024, No. 3, pp. 313-323.
Deep learning has become increasingly popular in omics data analysis. Recent works incorporating variable selection into deep learning have greatly enhanced model interpretability. However, because deep learning requires a large sample size, existing methods may produce uncertain findings when the dataset has a small sample size, as is common in omics data analysis. With the explosion and availability of omics data from multiple populations/studies, existing methods naively pool them into one dataset to enhance the sample size, ignoring that variable structures can differ across datasets, which may lead to inaccurate variable selection results. We propose a penalized integrative deep neural network (PIN) to simultaneously select important variables from multiple datasets. PIN directly aggregates multiple datasets as input and accommodates both homogeneity and heterogeneity among the datasets in an integrative analysis framework. Results from extensive simulation studies and applications of PIN to gene expression datasets from elders with different cognitive statuses and from ovarian cancer patients at different stages demonstrate that PIN outperforms existing methods, with considerably improved performance across multiple datasets. The source code is freely available on GitHub (rucliyang/PINFunc). We anticipate that the proposed PIN method will promote the identification of disease-related important variables based on multiple studies/datasets from diverse origins.
Keywords: deep learning; integrative analysis; multiple omics datasets; variable selection.
16. Optimal Timing of Business Conversion for Solvency Improvement
Authors: Peng LI, Ming ZHOU. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2024, No. 3, pp. 744-757.
In this paper, we study the optimal timing for an insurance company to convert its risky business in order to improve its solvency. The cash flow of the company evolves according to a jump-diffusion process. A business conversion option offers the company an opportunity to transfer the jump-risk business out. In exchange for this option, the company needs to pay both fixed and proportional transaction costs, where the proportional cost can also be seen as the profit loading of the jump-risk business. We formulate this problem as an optimal stopping problem. By solving the stopping problem, we find that the optimal timing of business conversion depends mainly on the profit loading of the jump-risk business: a large profit loading makes the conversion option valueless, whereas the fixed cost only delays the optimal timing of business conversion. Finally, numerical results are provided to illustrate the impact of transaction costs and environmental parameters on the optimal strategies.
Keywords: optimal stopping; jump-diffusion process; conversion option.
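To visualize the kind of surplus process involved, the sketch below simulates a jump-diffusion cash flow (drift plus Brownian noise minus compound Poisson claims). All parameter values are arbitrary, and the simulation does not solve the paper's optimal stopping problem.

```python
# Hedged sketch: simulate a jump-diffusion surplus process
#   dU_t = mu dt + sigma dW_t - dS_t,  with S_t a compound Poisson sum of claims.
import numpy as np

rng = np.random.default_rng(6)
T, n_steps = 5.0, 5000
dt = T / n_steps
mu, sigma = 2.0, 1.0          # premium drift and diffusion volatility
lam, claim_mean = 1.5, 1.0    # jump intensity and exponential claim mean

u = np.empty(n_steps + 1)
u[0] = 10.0                   # initial surplus
for k in range(n_steps):
    n_jumps = rng.poisson(lam * dt)
    jumps = rng.exponential(claim_mean, size=n_jumps).sum() if n_jumps else 0.0
    u[k + 1] = u[k] + mu * dt + sigma * np.sqrt(dt) * rng.normal() - jumps

print("terminal surplus:", u[-1], " minimum surplus:", u.min())
```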
17. The large sample property of the iterative generalized least squares estimation for hierarchical mixed effects model
Authors: Chunyu WANG, Maozai TIAN. Frontiers of Mathematics in China (CSCD), 2023, No. 5, pp. 327-339.
In many fields, we need to deal with hierarchically structured data. For this kind of data, a hierarchical mixed effects model can capture the correlation of variables at the same level by building a model for the regression coefficients. Due to the complexity of the random part of this model, seeking an effective method to estimate the covariance matrix is an appealing issue. The iterative generalized least squares estimation method was proposed by Goldstein in 1986 and was applied to a special case of the hierarchical model. In this paper, we extend the method to the general hierarchical mixed effects model, derive its expressions in detail, and apply it to economic examples.
Keywords: hierarchical model; iterative generalized least squares estimation; variance-covariance components; maximum likelihood estimation.
18. Modifiable risk factors for esophageal cancer in endoscopic screening population: A modeling study
Authors: Qian Zhang, Fan Wang, Hao Feng, Jie Xing, Shengtao Zhu, Hao Zhang, Yang Li, Wenqiang Wei, Shutian Zhang. Chinese Medical Journal (SCIE, CAS, CSCD), 2024, No. 3, pp. 350-352.
To the Editor: Esophageal cancer, one of the most common cancer types in China, with an estimated 346,633 new cases and 323,600 deaths in 2022, is becoming an increasingly serious clinical and public health problem.[1] The successful promotion of the self-management strategy has indicated that lifestyle modifications can be valuable in the primary prevention of cancer development. Adopting a healthy lifestyle has become a novel strategy for primary prevention and risk reduction in high-risk areas. Previous epidemiological studies have identified several lifestyle-related risk factors for esophageal cancer, including smoking and diet.[2] Each factor can typically explain a modest proportion of cancer risk; however, when combined, these known risk factors may substantially affect the risk of esophageal cancer. Nevertheless, some risk factors for esophageal cancer are non-modifiable, including age, low socioeconomic status, and family history. Whether and how these non-modifiable risk factors affect primary cancer prevention through intervention on modifiable risk factors remains unclear.
Keywords: esophageal; prevention; promotion.
19. A selective review on statistical methods for massive data computation: distributed computing, subsampling, and minibatch techniques
Authors: Xuetong Li, Yuan Gao, Hong Chang, Danyang Huang, Yingying Ma, Rui Pan, Haobo Qi, Feifei Wang, Shuyuan Wu, Ke Xu, Jing Zhou, Xuening Zhu, Yingqiu Zhu, Hansheng Wang. Statistical Theory and Related Fields (CSCD), 2024, No. 3, pp. 163-185.
This paper presents a selective review of statistical computation methods for massive data analysis. A huge number of statistical methods for massive data computation have been developed rapidly in the past decades. In this work, we focus on three categories of statistical computation methods: (1) distributed computing, (2) subsampling methods, and (3) minibatch gradient techniques. The first class of literature concerns distributed computing and focuses on the situation where the dataset is too large to be comfortably handled by a single computer, so that a distributed computing system with multiple machines has to be utilized. The second class of literature concerns subsampling methods and addresses the situation where the dataset is small enough to be placed on a single computer but too large to be easily processed by its memory as a whole. The last class of literature studies minibatch gradient-related optimization techniques, which have been used extensively for optimizing various deep learning models.
Keywords: distributed computing; massive data analysis; minibatch techniques; stochastic optimization; subsampling methods.
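A classic example of the distributed-computing category above is the one-shot "divide and conquer" estimator: each worker fits the model on its local shard and the results are averaged. The sketch below illustrates this for ordinary least squares on simulated data; it is a generic illustration, not a method attributed to the review.

```python
# Hedged sketch: one-shot distributed OLS by averaging per-shard estimates.
import numpy as np

rng = np.random.default_rng(7)
n, p, n_workers = 100_000, 5, 10
beta_true = np.arange(1, p + 1, dtype=float)

X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(size=n)

# Split the data into shards, solve OLS locally, then average the estimates.
shards = np.array_split(np.arange(n), n_workers)
local_betas = [np.linalg.lstsq(X[idx], y[idx], rcond=None)[0] for idx in shards]
beta_avg = np.mean(local_betas, axis=0)

beta_full = np.linalg.lstsq(X, y, rcond=None)[0]   # full-data benchmark
print("averaged vs full-data estimate, max abs difference:",
      np.max(np.abs(beta_avg - beta_full)))
```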
20. Group Strong Orthogonal Arrays with Appealing Two-Dimensional Space-Filling Property
Authors: WANG Chunyan, YANG Jinyu. Journal of Systems Science & Complexity, 2025, No. 4, pp. 1747-1765.
Space-filling designs are popular for computer experiments. Among them, space-filling designs with good two-dimensional projections are preferred, as two-factor interactions are more likely to be important in practice than three- or higher-order interactions. Considering two-dimensional projections, the authors propose a new class of designs called group strong orthogonal arrays. A group strong orthogonal array enjoys an attractive two-dimensional space-filling property in the sense that it can be partitioned into groups such that any two columns achieve stratifications on s^(u1) × s^(u2) grids for any positive integers u1, u2 with u1 + u2 = 3, and any two columns from different groups achieve stratifications on s^(v1) × s^(v2) grids for any positive integers v1, v2 with v1 + v2 = 4. Few existing designs enjoy such an appealing two-dimensional stratification property in the literature, and the number of levels of the obtained designs can be s^3 or s^4. In addition to the attractive stratification property, the proposed designs perform very well under orthogonality and uniform projection criteria, and are flexible in run size, rendering them highly suitable for computer experiments.
Keywords: computer experiment; group strong orthogonal array; orthogonality; uniform.
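The two-dimensional stratification property described above can be checked numerically: collapse each s^3-level column onto a coarser grid and verify that every cell of the resulting s^(u1) × s^(u2) grid receives the same number of runs. The sketch below shows only how such a check is evaluated, using a full factorial (which trivially stratifies) and a random design as invented test cases rather than an actual group strong orthogonal array.

```python
# Hedged sketch: checking whether two s^3-level columns of a design achieve
# balanced stratification on an s^(u1) x s^(u2) grid (u1 + u2 = 3).
import numpy as np
from itertools import product

s = 2

def stratifies(col1, col2, u1, u2):
    """True if the projection of (col1, col2) onto the s^u1 x s^u2 grid is balanced."""
    cells1 = col1 // s ** (3 - u1)          # collapse s^3 levels to s^u1 coarse cells
    cells2 = col2 // s ** (3 - u2)
    counts = np.zeros((s ** u1, s ** u2), dtype=int)
    np.add.at(counts, (cells1, cells2), 1)
    return counts.min() == counts.max()

# A full factorial in two s^3-level factors (64 runs for s = 2) stratifies on
# every grid; a random design of the same size usually does not.
full = np.array(list(product(range(s ** 3), repeat=2)))
rng = np.random.default_rng(8)
rand = rng.integers(0, s ** 3, size=(64, 2))

for (u1, u2) in [(1, 2), (2, 1)]:
    print((u1, u2),
          "full factorial:", stratifies(full[:, 0], full[:, 1], u1, u2),
          "random design:", stratifies(rand[:, 0], rand[:, 1], u1, u2))
```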