Funding: This paper was supported by the National Natural Science Foundation of China (Grant No. 61972261), the Natural Science Foundation of Guangdong Province (Grant No. 2023A1515011667), the Key Basic Research Foundation of Shenzhen (Grant No. JCYJ20220818100205012), and the Basic Research Foundation of Shenzhen (Grant No. JCYJ20210324093609026).
Abstract: Random sample partition (RSP) is a newly developed big data representation and management model for big data approximate computation problems. Academic research and practical applications have confirmed that RSP is an efficient solution for big data processing and analysis. However, a challenge in implementing RSP is determining an appropriate sample size for RSP data blocks: an overly large sample size increases the computational burden, whereas an overly small one leaves the blocks with insufficient distribution information. To address this problem, this paper presents a novel density estimation-based method (DEM) to determine the optimal sample size for RSP data blocks. First, a theoretical sample size is calculated from the multivariate Dvoretzky-Kiefer-Wolfowitz (DKW) inequality using the fixed-point iteration (FPI) method. Second, a practical sample size is determined by minimizing the validation error of a kernel density estimator (KDE) constructed on RSP data blocks as the sample size increases. Finally, a series of experiments is conducted to validate the feasibility, rationality, and effectiveness of DEM. Experimental results show that (1) the iteration function of the FPI method converges when calculating the theoretical sample size from the multivariate DKW inequality; (2) the KDE constructed on RSP data blocks with the sample size determined by DEM yields a good approximation of the probability density function (p.d.f.); and (3) DEM provides more accurate sample sizes than existing sample size determination methods from the perspective of p.d.f. estimation. This demonstrates that DEM is a viable approach to the sample size determination problem in big data RSP implementation.
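To make the two-step procedure concrete, here is a minimal Python sketch. It assumes a multivariate DKW-type bound of the form P(sup_x |F_n(x) − F(x)| > ε) ≤ d(n+1)e^{−2nε²}, which yields the fixed-point equation n = ln(d(n+1)/α)/(2ε²); the function names, the held-out negative log-likelihood used as the KDE validation error, and the stopping rule are illustrative assumptions, not the paper's exact specification.

```python
import math
import numpy as np
from scipy.stats import gaussian_kde

def theoretical_sample_size(d, eps, alpha, n0=100.0, tol=1e-6, max_iter=1000):
    """Step 1 (assumed form): solve n = ln(d*(n+1)/alpha) / (2*eps**2)
    by fixed-point iteration. The iteration map has derivative
    1 / (2*eps**2*(n+1)), so it contracts once n is moderately large,
    which is why the FPI converges."""
    n = n0
    for _ in range(max_iter):
        n_next = math.log(d * (n + 1) / alpha) / (2 * eps ** 2)
        if abs(n_next - n) < tol:
            break
        n = n_next
    return math.ceil(n_next)

def practical_sample_size(block, val, sizes, tol=1e-2, seed=0):
    """Step 2 (assumed form): grow the sample drawn from an RSP data block
    and stop when the KDE's validation error (mean negative log-likelihood
    on a held-out set val) stops improving by more than tol."""
    rng = np.random.default_rng(seed)
    prev_err = math.inf
    for n in sizes:
        sample = block[rng.choice(len(block), size=n, replace=False)]
        kde = gaussian_kde(sample.T)               # scipy expects (d, n) layout
        err = -np.mean(np.log(kde(val.T) + 1e-12)) # guard against log(0)
        if prev_err - err < tol:
            return n
        prev_err = err
    return sizes[-1]
```

Because the fixed-point map changes only logarithmically in n, its derivative shrinks like 1/(n+1) and the iteration settles quickly regardless of the starting guess; the practical step then refines this theoretical size against the actual distribution of the block.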
Funding: Supported by the National Statistical Research Project of China (Grant No. 2015LY77), the National Natural Science Foundation of China (Grant Nos. 11571080, 11571081, 71531006, 71672042, 11371318 and 11771390), the Engineering and Physical Sciences Research Council (Grant No. EP/L01226X/1), the Zhejiang Province Natural Science Foundation (Grant No. R16A010001), and the Fundamental Research Funds for the Central Universities.
Abstract: We propose a new nonparametric approach that represents the linear dependence structure of a spatiotemporal process in terms of latent common factors. Though formally similar to existing reduced-rank approximation methods, it differs fundamentally in that the low-dimensional structure is completely unknown in our setting and is learned from data collected irregularly over space but regularly in time. Furthermore, a graph Laplacian is incorporated into the learning to take advantage of continuity over space, and a new aggregation method based on randomly partitioning space is introduced to improve efficiency. We impose no stationarity conditions over space, as the learning is facilitated by stationarity in time. Kriging over space and time is then carried out based on the learned low-dimensional structure and scales to cases where the data are taken over a large number of locations and/or over a long time period. Asymptotic properties of the proposed methods are established. Illustrations with both simulated and real data sets are also reported.
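As a rough illustration of how a spatially smoothed low-dimensional structure can be extracted, the sketch below estimates factor loadings from a T × p data panel by combining the sample covariance over time with a graph Laplacian penalty over space. The criterion (maximize tr(AᵀSA) − λ·tr(AᵀLA) over orthonormal A, solved by the top eigenvectors of S − λL), the penalty weight lam, and the function name are illustrative assumptions; the paper's actual estimator, including its aggregation over random spatial partitions, is more involved.

```python
import numpy as np

def laplacian_factor_loadings(Y, L, r, lam=1.0):
    """Estimate p x r factor loadings A from a T x p panel Y observed at p
    locations, penalizing roughness over space with graph Laplacian L.
    One hedged reading: maximize tr(A' S A) - lam * tr(A' L A) subject to
    A'A = I, i.e. take the top-r eigenvectors of S - lam * L."""
    Yc = Y - Y.mean(axis=0)            # center each location's series in time
    S = Yc.T @ Yc / len(Y)             # p x p sample covariance over time
    vals, vecs = np.linalg.eigh(S - lam * L)
    A = vecs[:, -r:]                   # eigenvectors of the r largest eigenvalues
    factors = Yc @ A                   # T x r latent factor series
    return A, factors
```

Kriging at a new location or time point would then interpolate the rows of A over space, or extrapolate the factor series in time, and recombine them; this is what keeps the procedure scalable, since the T × p panel is reduced to r factor series plus a p × r loading matrix.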